Journal of Medical Internet Research,
Journal year: 2023,
Issue 25, pp. e49416 - e49416
Published: Sep. 22, 2023
While there has been substantial analysis of social media content deemed to spread misinformation about electronic nicotine delivery systems use, the strategic use of accusations to undermine opposing views has received limited attention.
Low uptake of the COVID-19 vaccine in the US has been widely attributed to social media misinformation. To evaluate this claim, we introduce a framework combining lab experiments (total N = 18,725), crowdsourcing, and machine learning to estimate the causal effect of 13,206 vaccine-related URLs on the vaccination intentions of Facebook users (N ≈ 233 million). We find that the impact of unflagged content that nonetheless encouraged skepticism was 46-fold greater than that of misinformation flagged by fact-checkers. Although flagged misinformation reduced predicted vaccination intentions significantly more when viewed, users' exposure to it was limited. In contrast, stories highlighting rare deaths after vaccination were among Facebook's most-viewed stories. Our work emphasizes the need to scrutinize factually accurate but potentially misleading content in addition to outright falsehoods.
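The core of this argument is an accounting identity: a URL's aggregate impact is its per-view persuasive effect multiplied by how many times it was viewed. The sketch below illustrates that arithmetic with invented numbers (the effect sizes and view counts are hypothetical placeholders, not the study's estimates): flagged misinformation can have a stronger per-view effect and still a far smaller total impact than widely viewed vaccine-skeptical content.

```python
# Hypothetical illustration: aggregate impact = per-view effect x views.
# All numbers below are invented for exposition; they are not the study's data.

urls = [
    # (label, per-view change in vaccination intent, total views)
    ("flagged misinformation story",      -0.005,   1_000_000),
    ("unflagged vaccine-skeptical story", -0.001, 230_000_000),
]

for label, per_view_effect, views in urls:
    total_impact = per_view_effect * views  # expected intentions shifted overall
    print(f"{label:<35} per-view={per_view_effect:+.4f} "
          f"views={views:>11,} total impact={total_impact:+,.0f}")
```

With these invented numbers the unflagged story shifts 46 times more intentions despite a fivefold weaker per-view effect, which is the shape of the asymmetry the abstract reports.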
Nature Medicine,
Journal year: 2024,
Issue 30(6), pp. 1559 - 1563
Published: April 29, 2024
Abstract
It is unclear how great a challenge pandemic and vaccine fatigue present to public health. We assessed perspectives on coronavirus disease 2019 (COVID-19) and routine immunization, as well as trust in information sources and future pandemic preparedness, in a survey of 23,000 adults in 23 countries in October 2023. The participants reported lower intent to get a COVID-19 booster in 2023 (71.6%) compared with 2022 (87.9%). A total of 60.8% expressed being more willing to get vaccinated for diseases other than COVID-19 as a result of their experience during the pandemic, while 23.1% were less willing. Trust in each of 11 selected information sources averaged below 7 on a 10-point scale, with one's own doctor or nurse and the World Health Organization the most trusted, averaging 6.9 and 6.5, respectively. Our findings emphasize that vaccine hesitancy challenges remain for public health practitioners, underscoring the need for targeted, culturally sensitive communication strategies.
Natural Language Processing Journal,
Journal year: 2024,
Issue 6, pp. 100053 - 100053
Published: Jan. 5, 2024
Spreading misinformation and fake news about COVID-19 has become a critical concern. It contributes to a lack of trust in public health authorities, hinders actions to control the virus's spread, and risks people's lives. This study aims to gain insights into the types of misinformation spread and to develop an in-depth analytical approach for analyzing fake news. The approach combines the ideas of Sentiment Analysis (SA) and Topic Modeling (TM) to improve the accuracy of topic extraction from a large volume of unstructured texts by considering sentiment words. A dataset containing 10,254 headlines from various sources was collected and prepared, and rule-based SA was applied to label the headlines with three sentiment tags. Among the TM models evaluated, Latent Dirichlet Allocation (LDA) demonstrated the highest coherence scores: 0.66 for 20 coherent negative sentiment-based topics and 0.573 for 18 positive sentiment-based topics, outperforming Non-negative Matrix Factorization (NMF) (coherence: 0.43) and Latent Semantic Analysis (LSA) (coherence: 0.40). The extracted topics and experiments highlight that misinformation primarily revolves around the COVID vaccine, crime, quarantine, medicine, and political and social aspects. The research offers insight into the effects of fake news, provides a valuable method for detecting misinformation, and emphasizes the importance of understanding its patterns and themes for protecting the public and promoting scientific accuracy. Moreover, it can aid in developing real-time monitoring systems to combat misinformation extending beyond COVID-19-related content, enhancing the applicability of the findings.
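As a rough illustration of the sentiment-aware pipeline this abstract describes, the sketch below labels headlines with a tiny hand-rolled rule-based sentiment lexicon, fits an LDA model on the negative-sentiment subset, and reports a c_v coherence score via gensim. The headline list, the toy lexicon, and all parameters are assumptions made for exposition, not the paper's configuration.

```python
# Minimal sketch of sentiment-filtered topic modeling, assuming gensim is
# installed; headlines, lexicon, and parameters are illustrative only.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

headlines = [
    "new covid vaccine causes severe side effects claim goes viral",
    "quarantine rules secretly extended say anonymous posts",
    "doctors confirm vaccine safety in large trial",
    "miracle medicine cures covid overnight according to forwarded message",
]

# 1) Rule-based sentiment labeling (three tags: negative / neutral / positive).
NEGATIVE_WORDS = {"severe", "secretly", "anonymous", "hoax", "scam"}
POSITIVE_WORDS = {"safety", "confirm", "cures", "miracle"}

def label(text):
    tokens = set(text.split())
    neg, pos = len(tokens & NEGATIVE_WORDS), len(tokens & POSITIVE_WORDS)
    return "negative" if neg > pos else "positive" if pos > neg else "neutral"

negative_docs = [h.split() for h in headlines if label(h) == "negative"]

# 2) Topic modeling restricted to the negative-sentiment subset.
dictionary = Dictionary(negative_docs)
corpus = [dictionary.doc2bow(doc) for doc in negative_docs]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)

# 3) Coherence (c_v), the score used to compare LDA against NMF / LSA.
coherence = CoherenceModel(model=lda, texts=negative_docs,
                           dictionary=dictionary, coherence="c_v").get_coherence()
print("c_v coherence:", round(coherence, 3))
```

The same filter-then-model loop would be run on the positive subset as well, so that each sentiment class gets its own set of topics and coherence score.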
PLoS ONE,
Journal year: 2024,
Issue 19(5), pp. e0302201 - e0302201
Published: May 22, 2024
The world's digital information ecosystem continues to struggle with the spread of misinformation. Prior work has suggested that users who consistently disseminate a disproportionate amount of low-credibility content (so-called superspreaders) are at the center of this problem. We quantitatively confirm this hypothesis and introduce simple metrics to predict the top superspreaders several months into the future. We then conduct a qualitative review to characterize the most prolific superspreaders and analyze their sharing behaviors. Superspreaders include pundits with large followings, media outlets, personal accounts affiliated with those outlets, and a range of influencers. They are primarily political in nature and use more toxic language than the typical user. We also find concerning evidence suggesting that Twitter may be overlooking prominent superspreaders. We hope this work will further public understanding of these bad actors and promote steps to mitigate their negative impacts on healthy discourse.
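The abstract mentions "simple metrics" for predicting future top superspreaders without defining them here; the sketch below shows one plausible metric of that kind, assuming a hypothetical per-account log of posts flagged as low-credibility: rank accounts by how many low-credibility posts they shared in one window, then check how many of the top accounts remain in the top set in a later window.

```python
# Minimal sketch of a superspreader-style ranking over a hypothetical post log.
# The metric (count of low-credibility shares) is illustrative, not necessarily
# the one used in the paper.
from collections import Counter

# (account, is_low_credibility) tuples for two observation windows (made up).
window_1 = [("pundit_a", True), ("pundit_a", True), ("outlet_b", True),
            ("user_c", False), ("outlet_b", True), ("pundit_a", True)]
window_2 = [("pundit_a", True), ("outlet_b", True), ("user_c", True),
            ("pundit_a", True), ("outlet_b", True)]

def top_spreaders(posts, k=2):
    """Rank accounts by number of low-credibility posts shared."""
    counts = Counter(acct for acct, low_cred in posts if low_cred)
    return [acct for acct, _ in counts.most_common(k)]

earlier, later = top_spreaders(window_1), top_spreaders(window_2)
overlap = set(earlier) & set(later)
print("top spreaders, window 1:", earlier)
print("top spreaders, window 2:", later)
print("persistence of top spreaders:", len(overlap) / len(earlier))
```

A metric is predictively useful in this sense when the persistence value stays high, i.e. the accounts ranked highest today are largely the same ones ranked highest months later.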
Erciyes İletişim Dergisi,
Journal year: 2025,
Issue 12(1), pp. 159 - 186
Published: Jan. 30, 2025
Disinformation remains one of the most important problems in mass communication. The problem reaches serious levels and becomes dangerous especially during extraordinary periods. The great harms of disinformation also became apparent during the Kahramanmaraş earthquake, which had a devastating impact on 11 provinces of Türkiye. Because intense disinformation began immediately after the earthquake, the Center for Combating Disinformation (Dezenformasyonla Mücadele Merkezi) devoted 93 percent of its first-month bulletins to correcting erroneous content about the earthquake. This study proceeds from the question of how successful the initiatives undertaken to combat disinformation have been. Its aim is to assess the effectiveness of these counter-disinformation efforts and to offer a perspective on new measures that could be taken. In the study, the Disinformation Bulletins were examined through document analysis; posts on the X platform were evaluated through open content analysis; and their content and motivation were analyzed using message analysis. The study is important in that it statistically demonstrates social media's persistence in disinformation and shows the motivation behind earthquake disinformation and how widely it spreads. The 39 pieces of disinformation selected as the sample were viewed 45,808,000 times; 87 percent of the content was neither corrected nor deleted; and most of it was produced with political motivations. The problem of social media failing to correct false information was thus laid bare once again. The study concludes that different strategies are needed to correct distorted information. After listing the existing recommendations on this subject, the steps that could be taken toward solving the problem are discussed.
Proceedings of the International AAAI Conference on Web and Social Media,
Journal year: 2023,
Issue 17, pp. 890 - 901
Published: June 2, 2023
Social media provide a fertile ground where conspiracy theories and radical ideas can flourish, reach broad audiences, and sometimes lead to hate or violence beyond the online world itself. QAnon represents a notable example of a political conspiracy theory that started out on social media but turned mainstream, in part due to public endorsement by influential figures. Nowadays, QAnon conspiracies often appear in the news, are part of political rhetoric, and are espoused by significant swaths of people in the United States. It is therefore crucial to understand how such a conspiracy took root online and what led so many users to adopt its ideas. In this work, we propose a framework that exploits both interaction and content signals to uncover evidence of user radicalization or support for QAnon. Leveraging a large dataset of 240M tweets collected in the run-up to the 2020 US Presidential election, we define and validate a multivariate metric of radicalization. We use it to separate distinct, naturally emerging classes of users whose behaviors are associated with radicalization processes, from self-declared supporters to hyper-active promoters. We also analyze the impact of Twitter's moderation policies on the interactions among the different classes: we discover aspects in which moderation succeeded, yielding a substantial reduction in the engagement received by hyperactive accounts, but also where it failed, showing that amplifiers were not deterred or affected by the Twitter intervention. Our findings refine our understanding of online radicalization, reveal effective and ineffective aspects of moderation, and call for the need to further investigate the role social media play in the spread of conspiracies.
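The abstract describes a multivariate radicalization metric built from interaction and content signals without spelling out its form; purely as an illustration, the sketch below standardizes a few hypothetical per-user signals and averages their z-scores into a single score, which is one common way to combine heterogeneous signals. The feature names, values, and equal weighting are assumptions, not the paper's definition.

```python
# Illustrative only: combining hypothetical interaction and content signals
# into a single radicalization-style score via averaged z-scores.
import statistics

users = {
    #            qanon_hashtag_share  retweets_of_promoters  daily_posts
    "casual":    (0.00,               0,                     3),
    "supporter": (0.10,               5,                     20),
    "promoter":  (0.45,               40,                    120),
}

def zscores(values):
    mean, stdev = statistics.mean(values), statistics.pstdev(values)
    return [(v - mean) / stdev if stdev else 0.0 for v in values]

columns = list(zip(*users.values()))               # one column per signal
z_by_signal = [zscores(col) for col in columns]    # standardize each signal
scores = {name: statistics.mean(z[i] for z in z_by_signal)
          for i, name in enumerate(users)}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:<10} radicalization-style score = {score:+.2f}")
```

Thresholds or clustering on such a score is one way distinct behavioral classes (e.g., self-declared supporters vs. hyper-active promoters) could then be separated.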
Abstract
The detection of state-sponsored trolls operating in influence campaigns on social media is a critical and unsolved challenge for the research community, with significant implications beyond the online realm. To address this challenge, we propose a new AI-based solution that identifies troll accounts solely through behavioral cues associated with their sequences of sharing activity, encompassing both their actions and the feedback they receive from others. Our approach does not incorporate any textual content shared and consists of two steps: First, we leverage an LSTM-based classifier to determine whether sequences of sharing activity belong to a troll account or to an organic, legitimate user. Second, we employ the classified sequences to calculate a metric named "Troll Score", quantifying the degree to which an account exhibits troll-like behavior. To assess the effectiveness of our method, we examine its performance in the context of the 2016 Russian interference campaign during the U.S. Presidential election. Our experiments yield compelling results, demonstrating that the approach can identify troll activity sequences with an AUC close to 99% and accurately differentiate between troll and organic users with an AUC of 91%. Notably, our behavioral-based approach holds an advantage in the ever-evolving landscape, where linguistic properties can be easily mimicked by Large Language Models (LLMs): in contrast to existing language-based techniques, it relies on more challenging-to-replicate behavioral cues, ensuring greater resilience in identifying influence campaigns, especially given the potential increase in the use of LLMs for generating inauthentic content. Finally, we assessed the generalizability of our solution to various entities driving different information operations and found promising results that will guide future research.
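To make the two-step pipeline concrete, the sketch below shows a minimal LSTM sequence classifier over encoded sharing actions and a "Troll Score" computed as the mean troll probability across an account's sequences. The feature encoding, dimensions, and scoring rule are placeholder assumptions for illustration, not the authors' exact architecture; the model here is untrained.

```python
# Minimal sketch, assuming PyTorch: an LSTM classifies fixed-length sequences
# of behavioral features, and a troll score averages per-sequence probabilities.
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, seq_len, n_features)
        _, (h_n, _) = self.lstm(x)            # h_n: (1, batch, hidden)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)  # troll probability

model = SequenceClassifier()

# One account represented as several sequences of sharing activity, e.g.
# (action type, time gap, retweets received, replies received) per step.
account_sequences = torch.rand(5, 20, 4)      # 5 sequences, 20 steps, 4 features

with torch.no_grad():
    probs = model(account_sequences)          # per-sequence troll probabilities
    troll_score = probs.mean().item()         # account-level "Troll Score"
print(f"Troll Score for this account: {troll_score:.2f}")
```

Because the inputs are behavioral rather than textual, swapping in LLM-generated text would not change the features the classifier sees, which is the resilience argument the abstract makes.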
Akademik Yaklaşımlar Dergisi,
Journal year: 2024,
Issue 15(1, Earthquake Special Issue), pp. 411 - 429
Published: Jan. 20, 2024
Natural disasters are natural events that occur without human intervention, at unexpected times, and can have devastating consequences. Disasters inherently involve a chaotic process, which makes them very difficult to manage. Accurate communication with those affected and rapid decision-making can reduce the negative outcomes of a disaster. Social media, widely used today as a powerful communication tool, is therefore extremely important in disaster management. However, social media platforms are anonymous environments without a definite control mechanism: not every post is necessarily accurate, and some may even be malicious. In this study, an analysis was carried out on the posts made by earthquake-related accounts created after the earthquake, and an assessment was made of account credibility, one of the biggest obstacles to the use of social media in disaster management. It was observed that 3,146 such accounts were created in the first 7 days after the Great Kahramanmaraş earthquake of February 6, 2023, and that they made 6,724 posts. Checks carried out today show that 48% of the accounts with more than 5 posts have been suspended or closed by the platform, and that the accounts still open have gained an average of 14 followers. In addition, the posts made by all of these accounts fell most frequently into the "Reaction/Wish" ("Tepki/Dilek"), "Help Request" ("Yardım Talebi"), and "Rescue" ("Kurtarma") categories, in that order.