Journal of Information Technology & Politics,
Journal year: 2023, Issue: unknown, pp. 1-16
Published: June 22, 2023
The use of warning labels on political advertisements is one way to help citizens better evaluate the source and veracity of messaging, and to combat the harms of misinformation on social media. Reliance on labeling is part of a larger policy push for greater transparency on media platforms with respect to the quality of information. In this study, we test the effectiveness of "traffic light" labels (red, orange, green) as indicia of veracity on YouTube. In an online experiment (N=1,054) with seven variations of TL-veracity labels, we find that red and orange traffic lights placed concurrently with the start of an advertisement significantly affect credibility perceptions. Taken together, the findings suggest that direct-to-consumer labels can be effective inputs to credibility perceptions, but their effectiveness depends on timing and position.
Perspectives on Psychological Science,
Journal year: 2023, Issue: 19(2), pp. 477-488
Published: Aug. 18, 2023
Identifying successful approaches for reducing the belief in and spread of online misinformation is of great importance. Social media companies currently rely largely on professional fact-checking as their primary mechanism for identifying falsehoods. However, professional fact-checking has notable limitations regarding coverage and speed. In this article, we summarize research suggesting that the "wisdom of crowds" can be harnessed successfully to help identify misinformation at scale. Despite potential concerns about the abilities of laypeople to assess information quality, recent evidence demonstrates that aggregating the judgments of groups of laypeople, or crowds, can effectively identify low-quality news sources and inaccurate news posts: Crowd ratings are strongly correlated with fact-checker ratings across a variety of studies using different designs, stimulus sets, and subject pools. We connect these experimental findings to attempts to deploy crowdsourced fact-checking in the field, and we close with recommendations and future directions for translating crowdsourced fact-checking into effective interventions.
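To make the aggregation step concrete, the following is a minimal sketch of the "wisdom of crowds" idea described above: average layperson accuracy ratings per item and correlate the crowd means with professional fact-checker ratings. The data, the 1-to-7 scale, the simple-mean aggregation, and the Pearson correlation are illustrative assumptions, not the exact methods of the studies summarized.

```python
# Minimal sketch: aggregate layperson accuracy ratings per item ("wisdom of
# crowds") and correlate the crowd means with professional fact-checker ratings.
# The data, the 1-7 scale, the simple-mean aggregation and the Pearson
# correlation are illustrative assumptions.
import pandas as pd

# Hypothetical long-format data: one row per (rater, item) accuracy judgment.
lay_ratings = pd.DataFrame({
    "item_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "rating":  [2, 3, 2, 6, 7, 6, 4, 5, 4],
})

# Hypothetical fact-checker ratings for the same items, indexed by item_id.
fact_checker = pd.Series({1: 1.5, 2: 6.5, 3: 4.0})

# Aggregate: the crowd's rating for an item is the mean of the lay judgments.
crowd_means = lay_ratings.groupby("item_id")["rating"].mean()

# How closely does the aggregated crowd track the professionals?
print(f"crowd-fact-checker correlation: {crowd_means.corr(fact_checker):.2f}")
```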
We analyze the spread of Donald Trump's tweets that were flagged by Twitter using two intervention strategies: attaching a warning label and blocking engagement with the tweet entirely. We find that while blocking engagement on certain tweets limited their diffusion, the messages we examined with warning labels spread further than those without labels. Additionally, messages that had been blocked on Twitter remained popular on Facebook, Instagram, and Reddit, being posted more often and garnering more visibility than messages that were either labeled or received no intervention at all. Taken together, our results emphasize the importance of considering content moderation at the ecosystem level.
Efforts to combat misinformation have intensified in recent years. In parallel, our scientific understanding of misinformation and the information ecosystem has improved. Here, I propose ways to improve interventions against misinformation based on this growing body of knowledge. First, because misinformation consumption is minimal and news consumption is low, more interventions should aim at increasing the uptake of reliable information. Second, because most people distrust unreliable sources but fail to sufficiently trust reliable sources, there is more room to increase trust in reliable sources than to reduce trust in unreliable ones. Third, because misinformation is largely a symptom of deeper socio-political problems, interventions should try to address these root causes, such as by reducing partisan animosity. Fourth, because a small number of powerful individuals give misinformation its visibility, interventions should target these ‘superspreaders’. Fifth, because false information is not necessarily harmful and true information can be used in misleading ways, misleadingness should take precedence over veracity when defining misinformation. Policymakers, journalists, and researchers would benefit from considering these arguments when thinking about the problem of misinformation and how to tackle it.
Journal of Online Trust and Safety,
Journal year: 2023, Issue: 1(5)
Published: April 26, 2023
Professional fact-checking of individual news headlines is an effective way to fight misinformation, but it is not easily scalable, because it cannot keep pace with the massive speed at which content gets posted on social media. Here we provide evidence for the effectiveness of trustworthiness ratings of news sources, instead of individual articles. In a large pre-registered experiment with quota-sampled Americans, we find that participants are less likely to share false headlines (and are more discerning between true versus false headlines) when 1-to-5 star trustworthiness ratings of sources were applied to the headlines. This holds both for ratings generated by professional fact-checkers and by laypeople (although the effect is stronger using fact-checker ratings). We also observe a positive spillover effect: sharing discernment increases even for headlines whose source was not rated, as the presence of some ratings prompts users to reflect on information quality more generally. The study suggests that displaying trustworthiness information regarding news sources provides a scalable approach to reducing the spread of low-quality information.
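As an illustration of the "sharing discernment" outcome mentioned above, here is a minimal sketch that computes the gap between sharing rates for true and false headlines, separately for a ratings condition and a control condition. The data frame, the column names, and this particular discernment definition are assumptions for illustration, not the study's registered analysis.

```python
# Minimal sketch: sharing discernment as the share rate for true headlines
# minus the share rate for false headlines, computed separately for
# participants who did and did not see source trustworthiness ratings.
# The data and the discernment definition are illustrative assumptions.
import pandas as pd

trials = pd.DataFrame({
    "condition": ["ratings", "ratings", "ratings", "ratings",
                  "control", "control", "control", "control"],
    "veracity":  ["true", "false", "true", "false",
                  "true", "false", "true", "false"],
    "shared":    [1, 0, 1, 0, 1, 1, 0, 1],
})

def discernment(df: pd.DataFrame) -> float:
    """Share rate for true headlines minus share rate for false headlines."""
    rates = df.groupby("veracity")["shared"].mean()
    return rates["true"] - rates["false"]

for condition, group in trials.groupby("condition"):
    print(condition, round(discernment(group), 2))
```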
Nature,
Journal year: 2024, Issue: 630(8015), pp. 123-131
Published: June 5, 2024
Abstract
The financial motivation to earn advertising revenue has been widely conjectured to be pivotal for the production of online misinformation (refs. 1–4). Research aimed at mitigating misinformation has so far focused on interventions at the user level (refs. 5–8), with little emphasis on how the supply of misinformation can itself be countered. Here we show that online misinformation is largely financed by advertising, we examine how this financing affects the companies involved, and we outline interventions for reducing the monetization of misinformation. First, we find that advertising on websites that publish misinformation is pervasive across several industries and is amplified by digital platforms that algorithmically distribute advertising across the web. Using an information-provision experiment (ref. 9), we show that companies that advertise on websites that publish misinformation face substantial backlash from their consumers. To examine why misinformation continues to be monetized despite this potential backlash for advertisers, we survey decision-makers at advertising companies. We find that most are unaware of where their companies' advertising appears, but have a strong preference to avoid advertising on websites that publish misinformation. Moreover, those who are uncertain about their company's role in financing misinformation increase their demand for a platform-based solution to reduce the monetization of misinformation when informed about how platforms amplify ad placement on such websites. We identify low-cost, scalable, information-based interventions that reduce the financial incentive to misinform and help counter misinformation online.
Political Communication,
Journal year: 2024, Issue: 41(3), pp. 373-392
Published: Feb. 11, 2024
How often do political elites in the U.S. share low-quality news sources? Are there differences between the parties? While past work has investigated individuals sharing such sources, there are few large-scale analyses of the quality of information shared by elites. As citizens rely on elite cues to inform their decision-making, officials sharing low-quality sites may increase polarization while providing legitimacy to these outlets. We fill this gap by collecting more than 300,000 links shared on Facebook by members of Congress and measuring how often each party shares links from known low-quality sources. We find that, as with the public, Republican officials share low-quality sources considerably more often than Democrats, and that this difference has increased over time. Finally, we investigate potential mechanisms underlying this partisan asymmetry: only Republicans receive more engagement when sharing low-quality sites, suggesting asymmetric incentives.
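A minimal sketch of the kind of measurement described above: compute, for each party, the fraction of shared links pointing to domains on a list of known low-quality sources. The domain list, the example links, and the extraction logic are hypothetical placeholders and stand in for the study's actual data pipeline.

```python
# Minimal sketch: for each party, the fraction of shared links that point to
# domains on a known low-quality-source list. The domain list, the example
# links and the extraction logic are hypothetical placeholders.
from urllib.parse import urlparse

import pandas as pd

low_quality_domains = {"examplefakenews.test", "hyperpartisan-site.test"}  # hypothetical list

links = pd.DataFrame({
    "party": ["R", "R", "D", "D"],
    "url": [
        "https://examplefakenews.test/story-1",
        "https://www.example-newspaper.test/article",
        "https://example-broadsheet.test/article",
        "https://hyperpartisan-site.test/post",
    ],
})

def domain(url: str) -> str:
    """Extract the host from a URL, dropping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

links["low_quality"] = links["url"].map(domain).isin(low_quality_domains)
# Share of low-quality links per party (0.5 for each party in this toy example).
print(links.groupby("party")["low_quality"].mean())
```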
Journal of Quantitative Description: Digital Media,
Journal year: 2025, Issue: 5
Published: Jan. 14, 2025
Researchers need reliable and valid tools to identify cases of untrustworthy information when studying the spread of misinformation on digital platforms. A common approach is to assess the trustworthiness of sources rather than individual pieces of content. One of the most widely used and comprehensive databases for source trustworthiness ratings is provided by NewsGuard. Since creating the database in 2019, NewsGuard has continually added new sources and reassessed existing ones. While it initially focused only on the US, the database has expanded to include sources from other countries. In addition to trustworthiness ratings, it contains various contextual assessments of sources, which are used less often in contemporary research on misinformation. In this work, we provide an analysis of the content of the database, focusing on the temporal stability and completeness of its ratings across countries, as well as the usefulness of its assessments of the political orientation and topics of sources for misinformation studies. We find that coverage and ratings have remained relatively stable since 2022, particularly for France, Italy, Germany, and Canada, with US-based sources consistently scoring lower than those from the other countries. Additional information on the covered sources provides valuable assets for characterizing sources beyond their trustworthiness. By evaluating ratings over time, we identify potential pitfalls that can compromise the validity of using the database as a tool for quantifying untrustworthy information, particularly if dichotomous "trustworthy"/"untrustworthy" labels are used. Lastly, we offer recommendations for media researchers on how to avoid these pitfalls and discuss the appropriate use of source-level approaches in general.
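To illustrate the dichotomization pitfall noted above, here is a minimal sketch that converts a 0-100 source trust score into "trustworthy"/"untrustworthy" labels at a fixed cutoff (60 points is assumed here, a commonly cited NewsGuard threshold) and shows how a small score change near the cutoff can flip a source's label between database snapshots. The domains and scores are invented for illustration.

```python
# Minimal sketch: dichotomizing a 0-100 source trust score into
# "trustworthy"/"untrustworthy" labels. Scores near the cutoff can flip labels
# between database snapshots even when the underlying change is small, which is
# one validity pitfall of dichotomous use. The 60-point cutoff, the domains and
# the scores are assumptions for illustration.
import pandas as pd

snapshots = pd.DataFrame({
    "domain":     ["site-a.example", "site-b.example", "site-c.example"],
    "score_2022": [59.5, 95.0, 20.0],
    "score_2024": [62.5, 94.0, 22.0],
})

CUTOFF = 60.0  # assumed trust-score threshold

for year in ("2022", "2024"):
    snapshots[f"label_{year}"] = snapshots[f"score_{year}"].ge(CUTOFF).map(
        {True: "trustworthy", False: "untrustworthy"}
    )

# site-a.example flips label on a 3-point change; site-b.example does not on a 1-point change.
print(snapshots[["domain", "label_2022", "label_2024"]])
```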
Current interventions to combat misinformation, including fact-checking, media literacy tips, and media coverage of misinformation, may have unintended democratic consequences. We propose that these interventions can increase skepticism toward all information, including accurate information. Across three online survey experiments in diverse countries (the US, Poland, Hong Kong; total N = 6127), we test the consequences of existing strategies and compare them with alternative interventions against misinformation. We examine how exposure to misinformation interventions affects individuals' perception of both factual and false information as well as their trust in key institutions. Our results show that while such interventions successfully reduce belief in false information, they also negatively impact the credibility of factual information. This highlights the need for further improved interventions that minimize the harms and maximize the benefits of fighting misinformation.
Social Science Computer Review,
Journal year: 2024, Issue: 42(6), pp. 1479-1504
Published: Feb. 8, 2024
The use of individual-level browsing data, that is, the records of a person's visits to online content through a desktop or mobile browser, is of increasing importance for social scientists. Browsing data have characteristics that raise many questions for statistical analysis, yet to date, little hands-on guidance on how to handle them exists. Reviewing extant research, and exploring data sets collected by our four research teams, spanning seven countries and several years, with over 14,000 participants and 360 million web visits, we derive recommendations along four steps: preprocessing the raw data; filtering out observations; classifying visits; and modelling browsing behavior. We formulate these recommendations with the aim to foster best practices in the field, which so far has paid little attention to justifying the decisions researchers need to take when analyzing browsing data.
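A minimal sketch of the four steps named above (preprocessing, filtering, classifying visits, and modelling behavior), under assumed column names, thresholds, and a hypothetical news-domain list; the article's actual recommendations are considerably more detailed.

```python
# Minimal sketch of the four steps: (1) preprocess raw visit records,
# (2) filter out implausible observations, (3) classify visits by domain,
# (4) model behavior at the participant level. Column names, thresholds and
# the news-domain list are illustrative assumptions.
import pandas as pd

raw = pd.DataFrame({
    "participant_id": [1, 1, 2, 2],
    "url": [
        "https://news-site.example/a",
        "https://shop.example/cart",
        "https://news-site.example/b",
        "https://video.example/watch",
    ],
    "timestamp": ["2023-05-01 08:00:00", "2023-05-01 08:00:01",
                  "2023-05-02 10:30:00", "2023-05-02 11:00:00"],
    "duration_sec": [45, 0, 600, 120],
})

# 1) Preprocessing: parse timestamps and extract the domain from each URL.
raw["timestamp"] = pd.to_datetime(raw["timestamp"])
raw["domain"] = raw["url"].str.extract(r"https?://([^/]+)", expand=False)

# 2) Filtering: drop visits with implausible durations (hypothetical thresholds).
visits = raw[(raw["duration_sec"] > 1) & (raw["duration_sec"] < 3600)]

# 3) Classifying: flag visits to a (hypothetical) list of news domains.
news_domains = {"news-site.example"}
visits = visits.assign(is_news=visits["domain"].isin(news_domains))

# 4) Modelling: aggregate to the participant level, e.g. each person's share of news visits.
news_share = visits.groupby("participant_id")["is_news"].mean()
print(news_share)
```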