PLoS ONE, Journal Year: 2025, Volume and Issue: 20(1), P. e0316258 - e0316258, Published: Jan. 15, 2025
The Covid-19 pandemic has sparked renewed attention to the risks of online misinformation, emphasizing its impact on individuals' quality of life through the spread of health-related myths and misconceptions. In this study, we analyze 6 years (2016–2021) of the Italian vaccine debate across diverse social media platforms (Facebook, Instagram, Twitter, YouTube), encompassing all major news sources, both questionable and reliable. We first use symbolic transfer entropy analysis of news production time-series to dynamically determine which category of sources, questionable or reliable, causally drives the agenda on vaccines. Then, leveraging deep learning models capable of accurately classifying vaccine-related content based on its conveyed stance and discussed topic, respectively, we evaluate the focus on various topics by sources promoting opposing views and compare the resulting user engagement. Our study uncovers misinformation not as a parasite of the information ecosystem that merely opposes the perspectives offered by mainstream media, but as an autonomous force, at times even overwhelming the latter. While its pervasiveness is evident in the significantly higher engagement that questionable sources attract compared to reliable ones (up to 11 times the median value), our findings underscore the need for consistent and thorough pro-vax coverage to counter this imbalance. This is especially important for sensitive topics, where the risk of misinformation spreading and potentially exacerbating negative attitudes toward vaccines is higher. While reliable sources have successfully promoted vaccine efficacy, reducing the impact of anti-vax narratives, gaps in their coverage of vaccine safety led to the highest engagement with questionable content.
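The symbolic transfer entropy analysis mentioned in this abstract can be sketched compactly: each time series is converted into a sequence of ordinal-pattern symbols, and a plug-in estimate of transfer entropy is computed over the symbol sequences. The sketch below is a minimal illustration of the general technique, not the paper's implementation; the function names, the embedding dimension m = 3, and the synthetic example are our own assumptions.

```python
from collections import Counter
import math

def symbolize(series, m=3):
    """Map each length-m window to its ordinal pattern (rank order of values)."""
    symbols = []
    for i in range(len(series) - m + 1):
        window = series[i:i + m]
        # e.g. window [5, 1, 9] -> pattern (1, 0, 2): index of smallest first
        symbols.append(tuple(sorted(range(m), key=lambda k: window[k])))
    return symbols

def symbolic_transfer_entropy(x, y, m=3):
    """Plug-in estimate of TE(X -> Y) in bits over ordinal-pattern symbols."""
    sx, sy = symbolize(x, m), symbolize(y, m)
    n = min(len(sx), len(sy)) - 1
    joint = Counter()     # counts of (y_next, y_now, x_now)
    pair_yx = Counter()   # counts of (y_now, x_now)
    pair_yy = Counter()   # counts of (y_next, y_now)
    single_y = Counter()  # counts of y_now
    for t in range(n):
        joint[(sy[t + 1], sy[t], sx[t])] += 1
        pair_yx[(sy[t], sx[t])] += 1
        pair_yy[(sy[t + 1], sy[t])] += 1
        single_y[sy[t]] += 1
    te = 0.0
    for (y1, y0, x0), c in joint.items():
        p_joint = c / n
        p_y1_given_y0x0 = c / pair_yx[(y0, x0)]
        p_y1_given_y0 = pair_yy[(y1, y0)] / single_y[y0]
        te += p_joint * math.log2(p_y1_given_y0x0 / p_y1_given_y0)
    return te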
The Annals of the American Academy of Political and Social Science, Journal Year: 2022, Volume and Issue: 700(1), P. 136 - 151, Published: March 1, 2022
Much like a viral contagion, misinformation can spread rapidly from one individual to another. Inoculation theory offers a logical basis for developing a psychological "vaccine" against misinformation. We discuss the origins of inoculation theory, starting with its roots in the 1960s as a "vaccine for brainwash," and detail the major theoretical and practical innovations that inoculation research has witnessed over the years. Specifically, we review a series of randomized lab and field studies showing that it is possible to preemptively "immunize" people against misinformation by preexposing them to severely weakened doses of the techniques that underlie its production, along with ways to spot and refute them. We also discuss evidence from interventions developed with governments and social media companies that help citizens around the world recognize and resist unwanted attempts to influence and mislead them. We conclude with a discussion of important open questions about the effectiveness of inoculation interventions.
Vaccine, Journal Year: 2023, Volume and Issue: 41(5), P. 1018 - 1034, Published: Jan. 1, 2023
Misinformation and disinformation around vaccines have grown in recent years, exacerbated during the Covid-19 pandemic. Effective strategies for countering vaccine misinformation are crucial for tackling vaccine hesitancy. We conducted a systematic review to identify and describe communications-based strategies used to prevent and ameliorate the effect of mis- and dis-information on people's attitudes and behaviours surrounding vaccination (objective 1) and examined their effectiveness (objective 2). We searched CINAHL, Web of Science, Scopus, MEDLINE, Embase, PsycInfo and MedRxiv in March 2021. The search strategy was built around three themes: (1) communications and media; (2) misinformation; and (3) vaccines. For trials addressing objective 2, risk of bias was assessed using the Cochrane risk-of-bias tool for randomized trials (RoB2). Of 2000 identified records, 34 eligible studies addressed objective 1, 29 of which also addressed objective 2 (25 RCTs and 4 before-and-after studies). Nine 'intervention approaches' were identified; most focused on the content of the intervention or message (debunking/correctional, informational, use of disease images or other 'scare tactics', humour, message intensity, inclusion of warnings, and communicating the weight of evidence), while two focused on delivery (timing and source). Some strategies, such as scare tactics, appear to be ineffective and may even increase misinformation endorsement. Communicating with certainty, rather than acknowledging uncertainty around vaccine efficacy and risks, was also found to backfire. Promising approaches include communicating the weight of evidence and scientific consensus related to vaccine myths, using humour, and incorporating warnings about encountering misinformation. Trying to debunk misinformation, and informational approaches more broadly, had mixed results. This review identifies some promising communication strategies; interventions should be further evaluated by measuring their effects on vaccination uptake rather than only on distal outcomes such as knowledge and attitudes, including in quasi-experimental and real-life contexts.
Nature Human Behaviour, Journal Year: 2023, Volume and Issue: 7(6), P. 892 - 903, Published: March 6, 2023
The extent to which belief in (mis)information reflects a lack of knowledge versus a lack of motivation to be accurate is unclear. Here, across four experiments (n = 3,364), we motivated US participants to be accurate by providing financial incentives for correct responses about the veracity of true and false political news headlines. Financial incentives improved accuracy and reduced partisan bias in judgements of headlines by 30%, primarily by increasing the perceived accuracy of true news from the opposing party (d = 0.47). Incentivizing people to identify news that would be liked by their political allies, however, decreased accuracy. Replicating prior work, conservatives were less accurate at discerning true from false headlines than liberals, yet incentives closed the accuracy gap between conservatives and liberals by 52%. A non-financial intervention was also effective, suggesting that motivation-based interventions are scalable. Altogether, these results suggest that a substantial portion of people's judgements of news accuracy reflects motivational factors.
European Psychologist, Journal Year: 2023, Volume and Issue: 28(3), P. 189 - 205, Published: July 1, 2023
Abstract: Developing effective interventions to counter misinformation is an urgent goal, but it also presents conceptual, empirical, and practical difficulties, compounded by the fact that research in this area is in its infancy. This paper provides researchers and policymakers with an overview of which individual-level interventions are likely to influence the spread of, susceptibility to, or impact of misinformation. We review the evidence for the effectiveness of four categories of interventions: boosting (psychological inoculation, critical thinking, and media and information literacy); nudging (accuracy primes and social norms nudges); debunking (fact-checking); and automated content labeling. In each area, we assess the empirical evidence, key gaps in knowledge, and practical considerations. We conclude with a series of recommendations for policymakers and tech companies to ensure a comprehensive approach to tackling misinformation.
Scientific Reports, Journal Year: 2023, Volume and Issue: 13(1), Published: April 8, 2023
Abstract: Misinformation can have a profound detrimental impact on populations' wellbeing. In this large UK-based online experiment (n = 2430), we assessed the performance of false-tag and inoculation interventions in protecting against different forms of misinformation ('variants'). While previous experiments have used perception- or intention-based outcome measures, we presented participants with real-life misinformation posts in a social media platform simulation and measured their engagement, a more ecologically valid approach. Our pre-registered mixed-effects models indicated that both interventions reduced engagement with misinformation, but inoculation was most effective. However, a random differences analysis revealed that the protection conferred by inoculation differed across posts. Moderation analysis showed that the immunity provided by inoculation is robust to variation in individuals' cognitive reflection. This study provides novel evidence for the general effectiveness of inoculation over false tags, social media platforms' current approach. Given inoculation's effect heterogeneity, a combination of interventions deployed in concert will likely be required for future safeguarding efforts.
Science, Journal Year: 2024, Volume and Issue: 384(6699), Published: May 30, 2024
Low uptake of the COVID-19 vaccine in the US has been widely attributed to social media misinformation. To evaluate this claim, we introduce a framework combining lab experiments (total N = 18,725), crowdsourcing, and machine learning to estimate the causal effect of 13,206 vaccine-related URLs on the vaccination intentions of US Facebook users (≈ 233 million). We estimate that the impact of unflagged content that nonetheless encouraged vaccine skepticism was 46-fold greater than that of misinformation flagged by fact-checkers. Although flagged misinformation reduced predicted vaccination intentions significantly more when viewed, users' exposure to it was limited. In contrast, unflagged stories highlighting rare deaths after vaccination were among Facebook's most-viewed stories. Our work emphasizes the need to scrutinize factually accurate but potentially misleading content in addition to outright falsehoods.
AI Magazine, Journal Year: 2024, Volume and Issue: 45(3), P. 354 - 368, Published: Aug. 1, 2024
Abstract: Misinformation such as fake news and rumors is a serious threat to information ecosystems and public trust. The emergence of large language models (LLMs) has great potential to reshape the landscape of combating misinformation. Generally, LLMs can be a double-edged sword in this fight. On the one hand, LLMs bring promising opportunities for combating misinformation due to their profound world knowledge and strong reasoning abilities. Thus, one emerging question is: can we utilize LLMs to combat misinformation? On the other hand, the critical challenge is that LLMs can be easily leveraged to generate deceptive misinformation at scale. Then, another important question is: how can we combat LLM-generated misinformation? In this paper, we first systematically review the history of combating misinformation before the advent of LLMs. Then we illustrate the current efforts and present an outlook for these two fundamental questions, respectively. The goal of this survey paper is to facilitate progress in utilizing LLMs for fighting misinformation and to call for interdisciplinary efforts from different stakeholders.
Journal of Media Psychology Theories Methods and Applications, Journal Year: 2024, Volume and Issue: 36(6), P. 397 - 409, Published: Jan. 23, 2024
Abstract: There has been substantial scholarly effort to (a) investigate the psychological underpinnings of why individuals believe in misinformation, and (b) develop interventions that hamper its acceptance and spread. However, there is a lack of systematic integration of these two research lines. We conducted a scoping review of empirically tested interventions (N = 176) to counteract misinformation. We developed an intervention map and analyzed boosting, inoculation, identity management, nudging, and fact-checking interventions as well as their various subdimensions. We further examined how interventions are theoretically derived from the two most prominent accounts of misinformation susceptibility: classical reasoning and motivated reasoning. We find that the majority of studies tested interventions that are poorly linked to basic theory and not geared towards reducing misinformation susceptibility. Based on this, we outline future avenues for effective countermeasures against misinformation.
PLoS ONE, Journal Year: 2024, Volume and Issue: 19(5), P. e0303183 - e0303183, Published: May 31, 2024
This paper presents an analysis of information disorder on social media platforms. The study employed methods such as Natural Language Processing, Topic Modeling, and Knowledge Graph building to gain new insights into the phenomenon of fake news and its impact on critical thinking and knowledge management. The analysis focused on four research questions: (1) the distribution of misinformation, disinformation, and malinformation across different platforms; (2) recurring themes and their visibility; (3) the role of artificial intelligence as an authoritative and/or spreader agent; and (4) strategies for combating information disorder. The dual role of AI was highlighted: both as a tool for fact-checking, truthiness assessment, and identification of bots, and as a potential amplifier of false narratives. Strategies proposed include improving digital literacy skills and promoting critical thinking among users.
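As a toy illustration of the kind of text-weighting step that NLP pipelines like the one above typically build on, here is a minimal pure-Python TF-IDF scorer. TF-IDF is a simpler keyword-weighting technique often used alongside topic modeling, not the paper's actual pipeline; the tokenization, smoothing choice, and example documents are our own assumptions.

```python
import math
from collections import Counter

def tfidf(docs):
    """Score each term in each document by TF-IDF (raw tf, smoothed idf)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()  # document frequency: in how many docs each term appears
    for tokens in tokenized:
        df.update(set(tokens))
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scores.append({
            term: (count / len(tokens)) * math.log((1 + n) / (1 + df[term]))
            for term, count in tf.items()
        })
    return scores

def top_terms(doc_scores, k=2):
    """Return the k highest-scoring terms for one document."""
    return [t for t, _ in sorted(doc_scores.items(), key=lambda kv: -kv[1])[:k]]
```

Terms that occur in many documents (here, a shared word like "vaccine") receive a lower weight than terms specific to one document, which is the signal recurring-theme analyses exploit.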
The popularization of science, while essential for making complex discoveries accessible to the public, carries significant risks, particularly in healthcare, where misinformation can lead to harmful behaviors and even lethal outcomes. This commentary examines the dual nature of science communication, highlighting its potential to foster public engagement and scientific literacy while also discussing the dangers of oversimplification and sensationalism. Historical and contemporary case studies, such as the misrepresentation of ivermectin during the COVID-19 pandemic and the enduring "5-Second Rule" myth, illustrate how distorted findings erode trust in institutions and fuel conspiracy theories. The digital age exacerbates these issues, with algorithms and social media amplifying misinformation at an unprecedented scale. The discussion emphasizes the heightened stakes of medical misinformation, which can directly endanger lives. It calls for a balanced approach to science popularization, advocating transparency, interdisciplinary collaboration, and education to combat misinformation. The commentary also extends to the emerging role of artificial intelligence in healthcare, warning against inflated claims and the risks of overreliance on unverified AI tools. Ultimately, it underscores the need for systemic reforms to ensure that science communication prioritizes accuracy, fosters critical thinking, and builds resilience against the spread of pseudoscience and disinformation.