Health misinformation, defined as health-oriented information that contradicts empirically supported scientific findings, has become a significant concern on social media platforms. In response, platforms have implemented diverse design solutions to block such misinformation or alert users about its potential inaccuracies. However, there is limited knowledge of users' perceptions of this specific type of misinformation and of the actions that are necessary, from both platforms and users themselves, to mitigate its proliferation. This paper explores users' (n = 22) perceptions of health misinformation. On the basis of our data, we identify types of health misinformation and align them with user-suggested countermeasures. We point out critical user demands for anti-misinformation design across topics, emphasizing transparency of sources, immediate presentation of corrective information, and clarity. Building on these findings, we propose a series of recommendations to aid future development aimed at counteracting health misinformation.
Science, Journal Year: 2024, Volume and Issue: 384(6699), Published: May 30, 2024
Low uptake of the COVID-19 vaccine in the US has been widely attributed to social media misinformation. To evaluate this claim, we introduce a framework combining lab experiments (total N = 18,725), crowdsourcing, and machine learning to estimate the causal effect of 13,206 vaccine-related URLs on the vaccination intentions of US Facebook users (≈ 233 million). We estimate that the impact of unflagged content that nonetheless encouraged vaccine skepticism was 46-fold greater than that of misinformation flagged by fact-checkers. Although flagged misinformation reduced predicted vaccination intentions significantly more when viewed, users' exposure to it was limited. In contrast, stories highlighting rare deaths after vaccination were among Facebook's most-viewed stories. Our work emphasizes the need to scrutinize factually accurate but potentially misleading content in addition to outright falsehoods.
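The asymmetry the study describes reduces to simple arithmetic: total impact is the per-view persuasive effect multiplied by exposure. A minimal sketch of that reasoning, using purely illustrative numbers rather than the paper's actual estimates:

```python
# Total impact of content = per-view effect on vaccination intent x number of views.
# All numbers below are invented for illustration, not taken from the study.

def total_impact(per_view_effect, views):
    """Aggregate change in vaccination intentions across all views."""
    return per_view_effect * views

# Flagged misinformation: strongly persuasive per view, but rarely seen.
flagged = total_impact(per_view_effect=-0.010, views=1_000_000)

# Unflagged vaccine-skeptical content: weaker per view, but seen far more often.
unflagged = total_impact(per_view_effect=-0.001, views=500_000_000)

# Exposure dominates: the unflagged content's aggregate impact is far larger.
print(unflagged / flagged)  # -> 50.0 with these illustrative numbers
```

The point of the sketch is only that a small per-view effect times massive exposure can dwarf a large per-view effect times limited exposure.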
AI Magazine, Journal Year: 2024, Volume and Issue: 45(3), P. 354 - 368, Published: Aug. 1, 2024
Abstract
Misinformation such as fake news and rumors is a serious threat to information ecosystems and public trust. The emergence of large language models (LLMs) has great potential to reshape the landscape of combating misinformation. Generally, LLMs can be a double-edged sword in this fight. On the one hand, LLMs bring promising opportunities for combating misinformation due to their profound world knowledge and strong reasoning abilities. Thus, one emerging question is: can we utilize LLMs to combat misinformation? On the other hand, the critical challenge is that LLMs can be easily leveraged to generate deceptive misinformation at scale. Then another important question is: how can we combat LLM-generated misinformation? In this paper, we first systematically review the history of combating misinformation before the advent of LLMs. Then we illustrate the current efforts and present an outlook for these two fundamental questions, respectively. The goal of this survey paper is to facilitate progress in utilizing LLMs for fighting misinformation and to call for interdisciplinary efforts from different stakeholders to combat LLM-generated misinformation.
Proceedings of the ACM on Human-Computer Interaction, Journal Year: 2025, Volume and Issue: 9(1), P. 1 - 30, Published: Jan. 10, 2025
Misinformation on private messaging platforms like WhatsApp and LINE is a global concern. However, research has primarily focused on combating misinformation on public social media. Misinformation in private groups is difficult to challenge due to group norms, interpersonal relationships, and technological affordances. This study investigates Auntie Meiyu, a fact-checking chatbot integrated into LINE, a popular messaging service in Taiwan. We interviewed 27 users who adopted Auntie Meiyu in their chat groups to understand their motivations and perceptions of the chatbot and to assess its influence on group interactions. Participants indicated that they adopted the chatbot to protect close family members from misleading news. Nevertheless, they experienced mixed feelings about the chatbot's robotic communication style and its errors in detecting misinformation. We conclude that conversational agents present a promising approach for tackling misinformation, particularly when participants disagree with one another, and offer design recommendations for leveraging AI-enabled agents in countering misinformation.
Information, Journal Year: 2025, Volume and Issue: 16(1), P. 41 - 41, Published: Jan. 13, 2025
As our society increasingly relies on digital platforms for information, the spread of fake news has become a pressing concern. This study investigates the ability of Greek and Portuguese Instagram users to identify fake news, highlighting the influence of cultural differences. The responses of 220 users were collected through questionnaires in Greece and Portugal. The data analysis considered the characteristics of posts, social endorsement, and platform usage duration. The results reveal distinct user behaviors: Greeks exhibit a unique inclination towards social connections, displaying an increased trust in friends' content and investing more time on Instagram, reflecting the importance of personal connections in their media consumption. They also give less weight to certain post characteristics, such as opposing beliefs, emotional language, or poor grammar, spelling, or formatting, when identifying fake news compared to the Portuguese, suggesting a weaker emphasis on content quality in their evaluations. These findings show that cultural differences affect how people behave on Instagram. Hence, content creators, platforms, and policymakers need specific plans to make online spaces more informative. Strategies should focus on enhancing awareness of key indicators such as linguistic quality and post structure, while addressing the role of social networks in spreading misinformation.
Proceedings of the ACM on Human-Computer Interaction, Journal Year: 2025, Volume and Issue: 9(2), P. 1 - 44, Published: May 2, 2025
The ongoing challenge of misinformation on social media motivates efforts to find effective countermeasures. In this study, we evaluated the potential of personalised nudging to reduce the sharing of misinformation on social media, as personalised nudging has been successfully applied in other areas of critical information handling. In an online experiment (N = 396) exposing users to misinformation posts, we assessed the degree of sharing between groups receiving (1) no nudges, (2) non-personalised nudges, and (3) personalised nudges. Personalisation was based on three psychometric dimensions - general decision-making style, consideration of future consequences, and need for cognition - used to assign the most appropriate nudge from a pool of five. The results showed significant differences (p < .05) between all groups, with the personalised group sharing the least misinformation. Detailed analyses at the level of individual nudges revealed that one nudge was universally effective, while two nudges were effective only in their personalised form. The results generally confirm the benefit of personalisation, although the effect is limited in scope. These findings shed light on the nuanced results of nudging studies, highlight the benefits of personalisation, and raise ethical considerations regarding the privacy implications of personalisation beyond those inherent to nudging itself.
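The assignment mechanism above (three psychometric dimensions mapped to the best-fitting nudge from a pool) can be sketched as a simple profile-to-nudge lookup. This is a hypothetical illustration only: the three dimension names follow the study, but the thresholds, nudge labels, and matching rule are invented, not the authors' actual procedure.

```python
# Hypothetical sketch of personalised nudge assignment.
# Dimension names follow the study; nudge labels and the "pick the
# strongest trait" rule are invented for illustration.

from dataclasses import dataclass

@dataclass
class Profile:
    decision_style: float        # general decision-making style (rational = high)
    future_consequences: float   # consideration of future consequences
    need_for_cognition: float    # enjoyment of effortful thinking

def assign_nudge(p: Profile) -> str:
    """Pick the nudge matched to the user's most pronounced trait."""
    scores = {
        "accuracy-prompt": p.decision_style,
        "consequence-reminder": p.future_consequences,
        "reflection-question": p.need_for_cognition,
    }
    return max(scores, key=scores.get)

print(assign_nudge(Profile(0.2, 0.9, 0.5)))  # -> consequence-reminder
```

In the actual study the pool held five nudges and the matching presumably used validated psychometric instruments; the sketch only shows the shape of the profile-to-nudge mapping.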