Journal of Medical Internet Research,
Journal year: 2024, Issue: unknown
Published: June 3, 2024
Stigma surrounding women's sexual and reproductive health (SRH) often prevents them from seeking essential care. In South Korea, unmarried women face strong cultural taboos, increasing their risk for conditions such as pelvic inflammatory disease, infertility, and cervical cancer. While many turn to web-based communities for support, these spaces frequently expose them to microaggressions, further discouraging access to care and worsening health risks. We aimed to encourage a safe space for support on the culturally taboo topic of SRH by counteracting and reducing microaggressions. We sought to make these last-resort spaces truly supportive by preventing microaggressions, fostering coping strategies, and educating rather than solely punishing perpetrators. We conducted co-design sessions with 14 Korean women. In the first session, we introduced the term microaggression and collaborated with participants to create base design components aimed at countering and preventing microaggressions. In the second session, participants initially viewed examples of microaggressive comments, then designed countermeasures using provided templates inspired by suggestions from the first session, and finally considered a scenario in which they themselves would be seeking support. We analyzed the session transcripts using inductive and deductive methods. Our analysis revealed 6 goals addressing coping and educational approaches, along with cultural characteristics shaping participants' designs. Reflective strategies were supported through designs that numerically indicate positive support and provide holistic views of diverse perspectives, helping users reassess provocative situations with cognitive clarity. Suppressive coping was fostered by encouraging less-emotional responses, empowering targets to address microaggressions logically and without self-blame. Educational approaches emphasized shared awareness and providing respectful education to perpetrators about the harm their words can cause. Participants suggested counterspeech mechanisms, including rephrasing and public educational resources, to balance countering microaggressions with freedom of expression. They also proposed forum-approved experts to guide discussions and ensure accurate, empathetic responses so that users can navigate nuanced situations effectively. Cultural factors heavily influenced these goals. Participants noted the nebulous nature of microaggressions, a reluctance to burden their social network, and societal perceptions of women as overly emotional, all of which shaped a desire to enhance logical justification. For example, participants preferred tools offering expert-led, comprehensive perspectives to rationalize their experiences while avoiding stigma. Our work advocates prioritizing explanatory approaches over punitive detection and deletion measures for individuals discussing stigmatized SRH topics. By integrating culturally informed counterspeech designs, communities can empower targets and allies and encourage reflection and behavior change among perpetrators, providing a step toward helping women ultimately seek the care they need.
Social Media + Society,
Journal year: 2024, Issue: 10(2)
Published: April 1, 2024
The stigmatized nature of nonsuicidal self-injury may render TikTok, a short-form, video-sharing social media platform, appealing to individuals who engage in this behavior. Since this community faces biased scrutiny based on the stigmatization surrounding mental health, users turn to TikTok, which offers a space to discuss self-injury, exchange support, experience validation with little fear of stigmatization, and facilitate harm reduction strategies. While TikTok's Community Guidelines permit users to share personal experiences with mental health topics, TikTok explicitly bans content that shows, promotes, or shares plans for self-harm. As such, the platform moderates user-generated content, leading to the exclusion and marginalization of this community in the digital space. Through semi-structured interviews with 8 TikTok users with a history of self-injury and an analysis of 150 videos, we explore how these users experience the platform's algorithm and its moderation of content about self-injury. Findings demonstrate that users understand and circumnavigate content moderation through hashtags, signaling, and algospeak, maintaining visibility while evading algorithmic detection on the platform.
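To make the evasion dynamic concrete, here is a minimal sketch of ours, not of TikTok's actual systems: a naive keyword filter is trivially bypassed by the kinds of character substitutions and shorthand that "algospeak" relies on. The blocklist and terms below are hypothetical.

```python
# Hypothetical blocklist that a naive keyword-based moderator might use.
BLOCKED_TERMS = {"self harm", "selfharm"}

def naive_filter(caption: str) -> bool:
    """Return True if the caption would be flagged by exact substring matching."""
    text = caption.lower()
    return any(term in text for term in BLOCKED_TERMS)

# "Algospeak" substitutions keep the meaning legible to the community
# while slipping past exact-match detection.
print(naive_filter("talking about self harm recovery"))  # True: flagged
print(naive_filter("talking about s3lf h4rm recovery"))  # False: evades the filter
print(naive_filter("talking about sh recovery"))         # False: community shorthand
```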
Furthermore, the findings emphasize that users actively engage in self-surveillance, self-censorship, and self-policing to create a safe online space of care. Content moderation, however, can ultimately hinder progress toward the destigmatization of self-injury.
Social media users may perceive moderation decisions by the platform differently, which can lead to frustration and dropout. This study investigates users' perceived justice and fairness of online content moderation when they are exposed to various illegal versus legal content scenarios, retributive versus restorative moderation strategies, and user-moderated versus commercially moderated platforms. We conduct an experiment with 200 American social media users of Reddit and Twitter.
Results show that perceived justice and fairness differ across these conditions: some moderation arrangements deliver higher perceived fairness than others, and the pattern for legal violations differs from that for illegal ones. We discuss opportunities for policymaking and improved moderation system design.
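The experiment described above crosses three two-level factors. As a minimal sketch of how such a 2 x 2 x 2 design could be enumerated (the factor labels are ours; the abstract does not specify the study's actual assignment procedure):

```python
from itertools import product

# Hypothetical factor labels for the three manipulations named in the abstract.
content_types = ["illegal", "legal"]
strategies = ["retributive", "restorative"]
platforms = ["user_moderated", "commercially_moderated"]

# Fully crossing the factors yields 2 x 2 x 2 = 8 experimental conditions.
conditions = list(product(content_types, strategies, platforms))
for i, condition in enumerate(conditions, start=1):
    print(i, condition)

# With 200 participants assigned evenly across the 8 cells,
# each condition would receive 25 participants.
print(200 // len(conditions))  # 25
```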
Proceedings of the ACM on Human-Computer Interaction,
Journal year: 2024, Issue: 8(CHI PLAY), pp. 1-32
Published: Oct. 14, 2024
Toxic behavior is known to cause harm in online games. Players regularly experience negative, hateful, or inappropriate behavior. Interventions, such as banning players and chat message filtering, can help combat toxicity but are neither widely available nor comprehensively studied regarding their approaches and evaluations.
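As a rough, hypothetical sketch of the two intervention types just named (the blocklist, threshold, and escalation rule are our assumptions, not taken from any reviewed system):

```python
from collections import defaultdict

# Hypothetical toxic terms and escalation threshold.
BLOCKED = {"noob", "trash"}
MUTE_AFTER = 3  # flagged messages before a temporary mute

flag_counts: defaultdict[str, int] = defaultdict(int)
muted: set[str] = set()

def deliver(player: str, message: str) -> str | None:
    """Return the message to broadcast, or None if it is suppressed."""
    if player in muted:
        return None  # muted players' messages are dropped entirely
    if any(term in message.lower() for term in BLOCKED):
        flag_counts[player] += 1          # chat message filtering
        if flag_counts[player] >= MUTE_AFTER:
            muted.add(player)             # escalate: mute the repeat offender
        return "*" * len(message)         # mask the toxic message
    return message

print(deliver("p1", "gg well played"))  # delivered unchanged
print(deliver("p1", "you are trash"))   # masked and counted against p1
```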
We conducted a systematic literature review that provides insights into the current state of the interventions literature, outlining its strengths and shortcomings. We identified 36 interventions and qualitatively analyzed their approaches. We describe the types of toxicity being addressed, the entities through which interventions act, the methods used by intervention systems, and how they are evaluated. Our results provide guidance for future interventions, including a design space based on existing systems. Furthermore, our findings highlight gaps, e.g., the sparsity of empirical evaluations and underexplored areas of the design space, enabling researchers to explore novel directions for interventions.
BACKGROUND
Online communities that many women use for support and safety on stigmatized sexual and reproductive health (SRH) issues often expose them to microaggressions, discouraging necessary medical care. This discouragement from online communities, combined with a reluctance to seek care due to stigma, poses significant risks to unmarried women, such as cervical cancer, pelvic inflammatory disease, ectopic pregnancy, and infertility.
OBJECTIVE
In this study, we aimed to cultivate a resilient safe space for the culturally taboo topic of SRH by counteracting and reducing microaggressions. We sought to make these last-resort spaces truly supportive by mitigating the negative effects of microaggressions on targets' and allies' well-being and by educating rather than solely punishing perpetrators.
METHODS
We conducted co-design sessions with 14 Korean women. In the first session, we introduced the term microaggression and collaborated with participants to create base design components aimed at countering and preventing them. The second session went through three stages: initially viewing post-comment samples, then designing with provided templates inspired by their suggestions, and finally considering a scenario where they themselves would be seeking support. We used inductive and deductive methods to analyze the session transcripts.
RESULTS
Our analysis revealed six goals that focus on coping and educational, rather than punitive, approaches. Goals 2, 3, and 6 relate to coping. Goal 2 (positive support) and Goal 6 (holistic understanding) help reflective coping by providing objective standards and the full picture to reassess situations, while Goal 3 (emotion management) encourages less emotional responses so that claims do not appear subjective, fostering suppressive coping. Goals 1, 4, and 5 relate to educational approaches. Goal 1 (shared knowledge) and Goal 5 (respectful education) highlight educational approaches, aiming to inform all users about microaggressions' impacts and emphasizing respectful education of perpetrators regarding their actions. Additionally, Goal 4 (expert guidance) involves forum-approved experts leading discussions to provide accurate information and temper critical reactions. The resulting designs reflected participants' culture, in which targets are reluctant either to burden their social network or to fuel perceptions that they are too emotional.
CONCLUSIONS
This work advocates prioritizing explanatory over punitive detection and deletion measures for individuals discussing stigmatized SRH topics. This shift not only aids targets and allies in coping but also encourages perpetrators to reflect on the impact of their words, and it provides a step toward ultimately encouraging women to seek the care they need.
Online platforms are increasingly investing significant resources into the systems used to report and remove unwanted content on their platforms. However, building these systems in ways that strengthen trust and are seen as fair by those who engage directly with them, whether as a reporter of content or as an individual having content removed, still remains a challenge for many platforms. Using two surveys, one sent to individuals who had recently reported content and an identical survey sent to those who had content removed from the platform, paired with logged data from six months before and three months following the surveys, we explore associations between people's perception of the fairness of Nextdoor's moderation system and their later behaviors, including content removals, future reporting, and visitations to the platform. We find that those who felt their experience was relatively fair were more likely to report again and to visit the platform more frequently in the months that followed. These findings demonstrate a connection between perceived fairness and engagement more broadly, pointing towards opportunities to build legitimacy through better design of reporting and removal systems.
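A minimal sketch of the kind of association analysis this abstract describes, run on synthetic data (the variables, effect size, and logistic model are our illustration, not the study's actual method):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-ins: survey-reported fairness (1-5) and whether the
# person reported content again in the months after the survey.
fairness = rng.integers(1, 6, size=n).astype(float)
p_report = 1.0 / (1.0 + np.exp(-(-1.5 + 0.4 * fairness)))  # assumed positive link
reported_again = rng.binomial(1, p_report)

# Logistic regression of later reporting on perceived fairness.
X = sm.add_constant(fairness)
result = sm.Logit(reported_again, X).fit(disp=False)
print(result.summary())
```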
Proceedings of the ACM on Human-Computer Interaction,
Journal year: 2024, Issue: 8(CHI PLAY), pp. 1-30
Published: Oct. 14, 2024
AI is increasingly being used to moderate player behaviour in online multiplayer games, working to identify and respond to toxic and problematic conduct with greater efficiency and accuracy than existing automated systems. However, little work has explored the application of AI moderation in the gaming ecosystem, despite growing ethical concerns about its applications in other domains. In this study, we conducted 2 expert workshops and interviewed 26 players and industry professionals on their understandings, perceptions, and experiences of AI moderation in games. Applying a metaphorical frame via template analysis, we outline four metaphors that capture participants' views of the roles of AI and automation in moderation: the Unreliable Police Force, the Unscrupulous Governor, the Uncaring Judge, and the Untiring Assistant. We discuss these metaphors as exacerbating a top-down, punitive justice system and as raising concerns around transparency, fairness and inclusion, privacy, and human-AI collaboration. To address these concerns, we put forward a set of design considerations for alternative approaches to AI moderation in games.