Content moderation is a critical aspect of platform governance on social media and is of particular relevance to addressing belief in and the spread of misinformation. However, current content moderation practices have been criticized as unjust. This raises an important question: who do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative conjoint survey experiment (N = 3,000) in which U.S. participants evaluated the legitimacy of hypothetical juries tasked with evaluating whether content was harmfully misleading. These juries varied in whether they were described as consisting of experts (e.g., domain experts), laypeople (e.g., social media users), or non-juries (e.g., a computer algorithm). We also randomized features of jury composition (size, necessary qualifications) and whether the jury engaged in discussion during the evaluation. Overall, expert juries were perceived as more legitimate than layperson juries or the algorithm. However, modifying layperson jury features helped increase legitimacy perceptions: politically balanced composition enhanced legitimacy, as did increased jury size, individual juror knowledge qualifications, and enabling juror discussion. Maximally enhanced layperson juries were perceived as comparably legitimate to expert panels. Republicans perceived expert juries as less legitimate than Democrats did, but still as more legitimate than baseline layperson juries. Conversely, larger lay juries with news knowledge qualifications were perceived as legitimate across the political spectrum. Our findings shed light on the foundations of procedural legitimacy and have implications for the design of content moderation systems.
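For readers unfamiliar with conjoint designs, attribute effects of this kind are typically estimated by regressing ratings on dummy-coded profile attributes with standard errors clustered by respondent. The Python sketch below illustrates that general approach only; the attribute names, levels, and data are hypothetical and are not taken from the study.

# Minimal, hypothetical sketch of AMCE-style estimation for a rating-based
# conjoint design: regress legitimacy ratings on dummy-coded jury attributes,
# clustering standard errors by respondent. All data below are made up.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "respondent_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "jury_type":  ["expert", "layperson", "layperson", "expert",
                   "layperson", "expert", "expert", "layperson"],
    "jury_size":  [3, 30, 3, 30, 30, 3, 30, 3],
    "discussion": ["yes", "no", "no", "yes", "yes", "no", "no", "yes"],
    "legitimacy": [6, 4, 3, 7, 5, 6, 5, 4],
})

model = smf.ols(
    "legitimacy ~ C(jury_type) + C(jury_size) + C(discussion)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(model.params)  # coefficients approximate attribute effects vs. baseline levels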
Psychological Science, Journal Year: 2024, Volume and Issue: 35(4), P. 435 - 450, Published: March 20, 2024
The spread of misinformation is a pressing societal challenge. Prior work shows that shifting attention to accuracy increases the quality of people's news-sharing decisions. However, researchers disagree on whether accuracy-prompt interventions work for U.S. Republicans/conservatives and whether partisanship moderates the effect. In this preregistered adversarial collaboration, we tested this question using a multiverse meta-analysis (k = 21; N = 27,828). In all 70 models, accuracy prompts improved sharing discernment among Republicans/conservatives. We observed significant partisan moderation for single-headline "evaluation" treatments (a critical test for one research team), such that the effect was stronger among Democrats than Republicans. However, this moderation was not consistently robust across different operationalizations of ideology/partisanship, exclusion criteria, or treatment type. Overall, significant partisan moderation was observed in 50% of specifications (all of which were considered critical tests by the other team). We discuss the conditions under which partisan moderation is observed and offer interpretations.
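A multiverse meta-analysis re-runs the same pooling step across many analytic specifications. The sketch below shows one common way such a loop can be organized, using a DerSimonian-Laird random-effects estimator; the specification names and effect sizes are invented for illustration and do not reproduce the authors' analysis.

# Hypothetical sketch of a multiverse of random-effects meta-analyses: each
# "specification" (partisanship measure, exclusion rule, treatment type)
# yields per-study effects and variances that are pooled separately.
import numpy as np

def dersimonian_laird(effects, variances):
    # Pool study effects under a random-effects model (DerSimonian-Laird).
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance
    w_re = 1.0 / (variances + tau2)
    est = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return est, se

# One entry per analytic specification (names and numbers are illustrative).
multiverse = {
    "binary_party_all_studies": ([0.08, 0.05, 0.11], [0.001, 0.002, 0.001]),
    "continuous_ideology_strict_exclusions": ([0.04, 0.07], [0.002, 0.003]),
}
for spec, (d, v) in multiverse.items():
    est, se = dersimonian_laird(d, v)
    print(f"{spec}: pooled effect = {est:.3f} (SE {se:.3f})")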
Researchers have tested a variety of interventions to combat misinformation on social media (e.g., accuracy nudges, digital literacy tips, inoculation, debunking). These interventions work via different psychological mechanisms, but all share the goals of increasing recipients' ability to distinguish between true and false information and/or increasing the veracity of news shared on social media. The current megastudy with 33,233 US-based participants tests nine prominent interventions in an identical setting using true, false, and misleading health and political headlines. We find that a wide range of interventions can improve discernment versus control during belief and/or sharing judgments. Reducing belief in misinformation is a goal accomplishable through multiple strategies targeting different mechanisms.
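In this literature, discernment is typically defined as the difference between responses to true and to false headlines. A minimal illustration with made-up numbers, assuming that standard definition:

# Illustrative computation of discernment: the rate of sharing (or believing)
# true headlines minus the rate for false headlines. Numbers are invented.
share_true_control, share_false_control = 0.42, 0.30
share_true_treated, share_false_treated = 0.44, 0.22

discernment_control = share_true_control - share_false_control   # ~0.12
discernment_treated = share_true_treated - share_false_treated   # ~0.22
treatment_effect = discernment_treated - discernment_control     # ~+0.10
print(discernment_control, discernment_treated, treatment_effect)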
Journal of Quantitative Description: Digital Media, Journal Year: 2025, Volume and Issue: 5, Published: Jan. 14, 2025
Researchers need reliable and valid tools to identify cases of untrustworthy information when studying the spread of misinformation on digital platforms. A common approach is to assess the trustworthiness of sources rather than individual pieces of content. One of the most widely used and comprehensive databases for source ratings is provided by NewsGuard. Since creating the database in 2019, NewsGuard has continually added new sources and reassessed existing ones. While initially focused only on the US, it has expanded to include sources from other countries. In addition to trustworthiness ratings, the database contains various contextual assessments of sources, which are used less often in contemporary research on misinformation. In this work, we provide an analysis of the content of the database, focusing on the temporal stability and completeness of its ratings across countries, as well as their usefulness for studies of political orientation and topics. We find that coverage and ratings have remained relatively stable since 2022, particularly for France, Italy, Germany, and Canada, with US-based sources consistently scoring lower than those of the other countries. Additional information on covered sources provides valuable assets for characterizing sources beyond trustworthiness. By evaluating the database over time, we identify potential pitfalls that can compromise the validity of using it as a tool for quantifying untrustworthy information, particularly if dichotomous "trustworthy"/"untrustworthy" labels are used. Lastly, we offer recommendations for media researchers on how to avoid these pitfalls and discuss the appropriate use of source-level approaches in general.
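The concern about dichotomous labels can be made concrete with a short sketch. NewsGuard scores sources on a 0-100 scale and researchers often binarize at a cutoff (commonly 60); the domains, scores, and cutoff below are illustrative assumptions rather than values from the database.

# Sketch of the dichotomization the abstract cautions about: collapsing a
# 0-100 source score into a binary label. Domains, scores, and the 60-point
# cutoff here are assumptions for illustration.
import pandas as pd

ratings = pd.DataFrame({
    "domain": ["example-news.com", "borderline-site.org", "lowquality.net"],
    "score":  [87.5, 59.5, 12.0],
})
CUTOFF = 60  # commonly used threshold for treating a source as trustworthy

ratings["trustworthy"] = ratings["score"] >= CUTOFF

# Dichotomizing discards information: 59.5 and 12.0 receive the same label,
# and sources near the cutoff can flip categories between database snapshots.
print(ratings)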
Recent studies have found promising evidence that lightweight, scalable tips promoting digital media literacy can improve the overall accuracy of social media users' sharing intentions and their ability to determine true versus false headlines. However, existing research is designed to test entire bundles of such tips, which limits our practical knowledge about whether some kinds of tips are more effective than others and hinders our ability to theorize about mechanisms. We address this limitation by designing experiments in which we randomly assign participants to receive one of 10 possible tips (or none, as a pure control group) and then indicate the extent to which they either believe or would share a series of posts. We find that assignment to nearly any tip improves sharing discernment, but only the tip drawing attention to posts' source improved belief discernment (because source was highly diagnostic in our stimulus set). Sharing intent appears to be more malleable than belief, consistent with the idea that fickle processes such as attention play an important role in driving sharing behavior.
There is growing concern over the spread of misinformation online. One intervention widely adopted by platforms for addressing falsehoods is applying 'warning labels' to posts deemed inaccurate by fact-checkers. Despite a rich literature on correcting misinformation after exposure, much less work has examined the effectiveness of warning labels presented concurrent with exposure. Promisingly, existing research suggests that warning labels effectively reduce belief in, and the spread of, misinformation. The size of these beneficial effects depends on how the labels are implemented and on characteristics of the content being labeled. Despite some individual differences, recent evidence indicates that labels are generally effective across party lines and other demographic characteristics. We discuss the potential implications and limitations of warning labelling policies for addressing misinformation online.
Content moderators review problematic content for technology companies. One concern about this critical job is that repeated exposure to false claims could cause moderators to come to believe the very claims they are supposed to moderate, via the "illusory truth effect." In a first field experiment with a global content moderation company (N = 199), we found that exposure to false claims while working did indeed increase subsequent belief in those claims among the (mostly Indian and Philippine) employees. We then tested an intervention to mitigate this effect: inducing an accuracy mindset. In both general population samples (N_India = 997; N_Philippines = 1,184) and a second sample of professional moderators (N = 239), we replicate the illusory truth effect in the control condition, but find that having participants consider accuracy when exposed to claims eliminates any effect of exposure on belief in falsehoods. These results show that the protective power of an accuracy mindset generalizes to non-Western populations and professional moderators. They highlight the importance of such interventions for ensuring a healthy internet for everyone.
Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown, Published: May 29, 2024
Abstract
Most research concerning the volume and spread of misinformation on the internet measures the construct at the source level, identifying a set of specific "fake" news domains that account for a relatively small share of overall news consumption. This source-level categorization obscures the potential for factually true information from mainstream sources to be useful in service of false or misleading narratives, a potentially far more prevalent form of misinformation. Using a combination of text- and network-analytic techniques, we find that articles from reliable sources that are co-shared with misinformation on social media (i.e. shared by users who also shared misinformation) are significantly more likely to contain misleading narratives than articles from the same sources that are not co-shared. This pattern is consistent with true information being strategically re-purposed to enhance the credibility and reach of false claims. Our frameworks broaden both the empirical and theoretical scope of misinformation research.
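As a rough illustration of the co-sharing construct, the sketch below flags an article from a reliable source as co-shared if any of its sharers also shared content from a low-quality domain; the share log, domain lists, and one-item threshold are invented and are far simpler than the paper's text- and network-analytic pipeline.

# Illustrative sketch of a co-sharing measure: an article from a reliable
# source counts as "co-shared" if at least one of its sharers also shared
# content from a low-quality domain. All data are made up.
from collections import defaultdict

shares = [  # (user, url, domain) tuples from a hypothetical share log
    ("u1", "reliable.com/a1", "reliable.com"),
    ("u1", "fakesite.net/x", "fakesite.net"),
    ("u2", "reliable.com/a2", "reliable.com"),
]
low_quality_domains = {"fakesite.net"}

users_sharing_low_quality = {u for u, _, d in shares if d in low_quality_domains}

co_shared = defaultdict(bool)
for user, url, domain in shares:
    if domain not in low_quality_domains:
        co_shared[url] |= user in users_sharing_low_quality

print(dict(co_shared))  # {'reliable.com/a1': True, 'reliable.com/a2': False}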
Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1), Published: May 20, 2024
Nudge-based misinformation interventions are presented as cheap and effective ways to reduce the spread of misinformation online. However, despite online information environments typically containing relatively low volumes of misinformation, most studies testing the effectiveness of nudge interventions present equal proportions of true and false information. As nudges can be highly context-dependent, it is imperative to validate nudge-based interventions in environments with more realistic proportions of misinformation. The current study (N = 1,387) assessed a combined accuracy and social-norm nudge in a simulated social-media environment with varying proportions of false information (50%, 20%, 12.5%) relative to true and non-news-based (i.e., "social") information. The intervention was effective at improving sharing discernment, including in conditions with lower proportions of misinformation, providing ecologically valid support for the use of nudge-based interventions to counter the propagation of misinformation on social media.
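A sketch of how a simulated feed with a more realistic misinformation share might be assembled, and how sharing discernment could then be scored; the item counts, proportion of non-news posts, and decision data below are assumptions for illustration, not the study's materials.

# Sketch: build a feed where false items make up a set fraction of the news
# posts, padded with non-news ("social") posts, then score sharing discernment.
import random

def build_feed(n_news=40, false_share=0.125, n_social=60, seed=1):
    n_false = round(n_news * false_share)
    items = ["false"] * n_false + ["true"] * (n_news - n_false) + ["social"] * n_social
    random.Random(seed).shuffle(items)
    return items

def sharing_discernment(decisions):
    # decisions: list of (item_type, shared) pairs;
    # discernment = P(share | true) - P(share | false)
    def rate(kind):
        flags = [shared for item, shared in decisions if item == kind]
        return sum(flags) / len(flags) if flags else 0.0
    return rate("true") - rate("false")

feed = build_feed(false_share=0.125)
print(feed.count("false"), feed.count("true"), feed.count("social"))  # 5 35 60

decisions = [("true", True), ("true", False), ("false", False), ("false", True)]
print(sharing_discernment(decisions))  # 0.5 - 0.5 = 0.0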