Vaccines,
Journal Year:
2024,
Volume and Issue:
12(7), P. 728 - 728
Published: June 29, 2024
Background: This cross-sectional survey aimed to explore the reasons for receiving HPV vaccination among eligible adults in Italy.
Methods: The survey was conducted from July 2023 to April 2024 in Naples, Southern Italy.
Results: A total of 282 questionnaires were collected. The majority of respondents (73.2%) were aware that the HPV vaccination is recommended, and this awareness was more likely among women, healthcare workers (HCWs) or students in the health sciences, and those who had acquired information from physicians. The most frequently cited reasons for vaccinating were self-protection from infection (77.6%) and from cervical/oral/penile/anal cancer (68.9%), knowing that the vaccine is free of charge (46.2%), awareness of the severity of the disease (43%), the desire to protect their partner (42.6%), and the perception of being at risk (24.2%).
Being HCWs, believing that the infection could cause a serious disease, and having a higher number of oral intercourse experiences in the last year were significant predictors of the perception of being at risk. Female and Italian respondents were more likely to receive the vaccination because it is effective in preventing cancer.
Conclusions: Targeted educational programs and interventions should be developed to enhance knowledge and foster positive attitudes toward vaccination.
Mainstream media, through their decisions on what to cover and how to frame the stories they cover, can mislead readers without using outright falsehoods. Therefore, it is crucial to have tools that expose these editorial choices and the underlying media bias.
In this paper, we introduce the Media Bias Detector, a tool for researchers, journalists, and news consumers. By integrating large language models, we provide near real-time, granular insights into the topics, tone, political lean, and facts of articles, aggregated at the publisher level.
We assessed the tool's impact by interviewing 13 experts from journalism, communications, and science, revealing key insights into usability and functionality, practical applications, and AI's role in powering media bias tools. We explored these insights in more depth with a follow-up survey of 150 respondents. This work highlights opportunities for AI-driven tools to empower users to critically engage with news content, particularly in politically charged environments.
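The publisher-level aggregation of per-article labels mentioned above can be sketched as follows. This is an illustrative assumption, not the tool's actual pipeline or schema: the article records, the label fields (`lean`, `tone`), and the numeric scoring scale are all hypothetical; in the real tool such labels would come from large language models.

```python
# Hypothetical per-article labels (in the real tool these would be LLM outputs);
# we aggregate them to the publisher level with plain stdlib tools.
from collections import defaultdict
from statistics import mean

articles = [
    {"publisher": "Daily A", "topic": "economy", "lean": -0.4, "tone": 0.1},
    {"publisher": "Daily A", "topic": "health",  "lean": -0.2, "tone": -0.3},
    {"publisher": "Daily B", "topic": "economy", "lean": 0.5,  "tone": 0.2},
]

# Group articles by publisher.
by_publisher = defaultdict(list)
for art in articles:
    by_publisher[art["publisher"]].append(art)

# Publisher-level summary: mean lean/tone and article count.
summary = {
    pub: {
        "mean_lean": mean(a["lean"] for a in arts),
        "mean_tone": mean(a["tone"] for a in arts),
        "n_articles": len(arts),
    }
    for pub, arts in by_publisher.items()
}
print(summary)
```

In practice, such a summary would be recomputed continuously as new articles are labeled, which is what enables "near real-time" publisher-level views.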
PNAS Nexus,
Journal Year:
2025,
Volume and Issue:
4(2)
Published: Feb. 1, 2025
Abstract
Misinformation disrupts our information ecosystem, adversely affecting individuals and straining social cohesion and democracy. Understanding what causes online (mis)information to (re)appear is crucial for fortifying the information ecosystem.
We analyzed a large-scale Twitter (now “X”) dataset of about 2 million tweets across 123 fact-checked stories. Previous research suggested a falsehood effect (false information reappears more frequently) and an ambiguity effect (ambiguous information reappears more frequently). However, robust indicators of their existence remain elusive.
Using polynomial statistical modeling, we compared a falsehood model, an ambiguity model, and a dual-effect model. The data supported the dual-effect model (13.76 times as likely as the null model), indicating that both effects promote reappearance. The evidence for the ambiguity effect was stronger, by a factor of 6.6. Various control checks affirmed the ambiguity effect, while the falsehood effect was less stable.
Nonetheless, the best-fitting model explained <7% of the variance, suggesting that (i) reappearance dynamics are complex and (ii) the effects may play a smaller role than previous research has suggested. These findings underscore the importance of understanding the reappearance of (mis)information, though the focus on fact-checked stories may limit generalizability to the full spectrum of information shared online.
Even so, the results can inform policymakers, journalists, media platforms, and the public in building a resilient information environment, while also opening new avenues for research, including source credibility, cross-platform applicability, and psychological factors.
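As an illustration of the kind of model comparison this abstract describes, the sketch below fits a null model and a dual-effect linear model to synthetic reappearance data and approximates a Bayes factor from the BIC difference. The data, predictors, effect sizes, and Gaussian-error assumption are mine, not the study's; the study's actual polynomial models and 13.76 figure are not reproduced here.

```python
# Minimal sketch (assumptions, not the authors' code): compare a null model
# against a dual-effect model via BIC-approximated Bayes factors.
import numpy as np

rng = np.random.default_rng(0)
n = 500
falsehood = rng.integers(0, 2, n)   # 1 = story rated false (synthetic)
ambiguity = rng.integers(0, 2, n)   # 1 = story veracity ambiguous (synthetic)
# Synthetic reappearance outcome: both effects contribute weakly, plus noise.
reappearances = 1.0 + 0.4 * falsehood + 0.6 * ambiguity + rng.normal(0, 2, n)

def fit_bic(predictors, y):
    """OLS fit with an intercept; return BIC under Gaussian errors."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)                       # MLE of error variance
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return X.shape[1] * np.log(len(y)) - 2 * loglik       # k*ln(n) - 2*lnL

bic_null = fit_bic([], reappearances)
bic_dual = fit_bic([falsehood, ambiguity], reappearances)
# exp(BIC difference / 2) approximates the Bayes factor vs. the null.
bf_dual_vs_null = np.exp((bic_null - bic_dual) / 2)
print(f"Dual-effect model is ~{bf_dual_vs_null:.1f}x as likely as the null")
```

A Bayes factor above 1 favors the dual-effect model; the BIC approximation penalizes the extra parameters, so weak effects must earn their keep, mirroring the abstract's "X times as likely as the null model" framing.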
Deleted Journal,
Journal Year:
2025,
Volume and Issue:
2(1)
Published: April 2, 2025
Abstract
Understanding how misinformation affects the spread of disease is crucial for public health, especially given recent research indicating that misinformation can increase vaccine hesitancy and discourage vaccine uptake.
However, it is difficult to investigate the interaction between misinformation and epidemic outcomes due to a dearth of data-informed holistic models.
Here, we employ an epidemic model that incorporates a large, mobility-informed physical contact network as well as the distribution of misinformed individuals across counties, derived from social media data. The model allows us to simulate various scenarios to understand how epidemic spreading may be affected by misinformation spread through one particular platform.
Using this model, we compare a worst-case scenario, in which individuals become misinformed after a single exposure to low-credibility content, with a best-case scenario where the population is highly resilient to misinformation.
We estimate the additional portion of the U.S. population that would be infected over the course of the COVID-19 pandemic in the worst-case scenario. This work can provide policymakers with insights about the potential harms of online misinformation.
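A toy version of this scenario comparison can be sketched as follows. The random contact network, vaccination rates, and transmission parameters below are illustrative assumptions, far simpler than the paper's mobility-informed, county-level model: here, misinformed individuals simply refuse vaccination, and we compare final attack rates.

```python
# Minimal discrete-time SIR on a random contact network (assumed parameters,
# not the paper's model). Misinformed individuals skip vaccination.
import numpy as np

rng = np.random.default_rng(1)
N, MEAN_DEGREE, P_TRANSMIT, P_RECOVER = 2000, 8, 0.08, 0.2

# Symmetric Erdos-Renyi-style contact network as a boolean adjacency matrix.
adj = rng.random((N, N)) < MEAN_DEGREE / N
adj = np.triu(adj, 1)
adj = adj | adj.T

def run_sir(misinformed_frac):
    """Return the share of the population ever infected."""
    misinformed = rng.random(N) < misinformed_frac
    vaccinated = ~misinformed & (rng.random(N) < 0.7)  # 70% uptake among the rest
    state = np.zeros(N, dtype=int)                     # 0=S, 1=I, 2=R
    state[vaccinated] = 2                              # vaccinated -> removed
    seeds = rng.choice(np.flatnonzero(state == 0), 10, replace=False)
    state[seeds] = 1
    ever_infected = state == 1
    while (state == 1).any():
        pressure = adj[:, state == 1].sum(axis=1)      # infected neighbors
        infect = (state == 0) & (rng.random(N) < 1 - (1 - P_TRANSMIT) ** pressure)
        recover = (state == 1) & (rng.random(N) < P_RECOVER)
        state[infect] = 1
        state[recover] = 2
        ever_infected |= infect
    return ever_infected.mean()

worst = run_sir(0.5)   # half the population misinformed and unvaccinated
best = run_sir(0.0)    # fully resilient population
print(f"worst-case attack rate: {worst:.2%}, best-case: {best:.2%}")
```

The gap between the two attack rates is the quantity of interest: the additional portion of the population infected attributable to misinformation-driven vaccine refusal.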
PNAS Nexus,
Journal Year:
2025,
Volume and Issue:
4(5)
Published: April 30, 2025
Content moderation is a critical aspect of platform governance on social media and of particular relevance to addressing the belief in and spread of misinformation. However, current content moderation practices have been criticized as unjust. This raises an important question: who do Americans want deciding whether online content is harmfully misleading?
We conducted a nationally representative survey experiment (n = 3,000) in which US participants evaluated the legitimacy of hypothetical juries tasked with evaluating whether online content was harmfully misleading.
These juries varied in whether they were described as consisting of experts (e.g. domain experts), laypeople (e.g. social media users), or nonjuries (e.g. a computer algorithm). We also randomized features of the jury composition (size and necessary qualifications) and whether the jury engaged in discussion during the evaluation.
Overall, expert juries were perceived as more legitimate than layperson juries or the algorithm. However, modifying layperson jury features helped increase legitimacy perceptions: nationally and politically balanced composition enhanced legitimacy, as did increased jury size, individual juror knowledge qualifications, and enabling discussion. Maximally enhanced layperson juries were perceived as comparably legitimate to expert panels.
Republicans perceived the juries as less legitimate compared to Democrats, but still rated them above baseline juries. Conversely, larger lay juries with news qualifications were perceived as legitimate by those across the political spectrum.
Our findings shed light on the foundations of institutional legitimacy and have implications for the design of content moderation systems.