The Intended Uses of Automated Fact-Checking Artefacts: Why, How and Who
Michael Schlichtkrull, Nedjma Ousidhoum, Andreas Vlachos

et al.

Published: Jan. 1, 2023

Automated fact-checking is often presented as an epistemic tool that fact-checkers, social media consumers, and other stakeholders can use to fight misinformation. Nevertheless, few papers thoroughly discuss how. We document this by analysing 100 highly-cited papers and annotating elements related to intended use, i.e., means, ends, and stakeholders. We find that narratives leaving out some of these aspects are common, that many papers propose inconsistent means and ends, and that the feasibility of suggested strategies rarely has empirical backing. We argue that this vagueness actively hinders the technology from reaching its goals, as it encourages overclaiming, limits criticism, and prevents stakeholder feedback. Accordingly, we provide several recommendations for thinking and writing about the intended use of fact-checking artefacts.

Language: English

Combining interventions to reduce the spread of viral misinformation
Joseph B. Bak-Coleman, Ian Kennedy, Morgan Wack

et al.

Nature Human Behaviour, Journal Year: 2022, Volume and Issue: 6(10), P. 1372 - 1380

Published: June 23, 2022

Abstract Misinformation online poses a range of threats, from subverting democratic processes to undermining public health measures. Proposed solutions range from encouraging more selective sharing by individuals to removing the false content and accounts that create or promote it. Here we provide a framework to evaluate interventions aimed at reducing viral misinformation, both in isolation and when used in combination. We begin by deriving a generative model of misinformation spread, inspired by research on infectious disease. By applying this model to a large corpus (10.5 million tweets) of misinformation events that occurred during the 2020 US election, we reveal that commonly proposed interventions are unlikely to be effective in isolation. However, our framework demonstrates that a combined approach can achieve a substantial reduction in the prevalence of misinformation. Our results highlight a practical path forward as misinformation continues to threaten vaccination efforts and equity around the globe.
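
The abstract describes the modelling approach only at a high level. The following is a minimal sketch, not the authors' model, of how a branching-process view of misinformation cascades (inspired by infectious-disease models) can be used to compare interventions in isolation and in combination; every parameter value is an illustrative assumption.

```python
# Hypothetical sketch: cascades as a branching process in which each
# intervention multiplies the effective reproduction number R downward.
import numpy as np

rng = np.random.default_rng(0)

def cascade_size(r0, interventions=(), max_size=10_000):
    """Simulate one cascade; each intervention is a fractional reduction of R."""
    r = r0
    for reduction in interventions:
        r *= 1.0 - reduction
    size, frontier = 1, 1
    while frontier and size < max_size:
        frontier = rng.poisson(r, frontier).sum()  # offspring of current spreaders
        size += frontier
    return size

def mean_prevalence(r0, interventions=(), n_cascades=500):
    return np.mean([cascade_size(r0, interventions) for _ in range(n_cascades)])

baseline = mean_prevalence(r0=1.2)
single   = mean_prevalence(r0=1.2, interventions=(0.3,))           # one intervention
combined = mean_prevalence(r0=1.2, interventions=(0.3, 0.2, 0.2))  # stacked interventions
print(f"baseline {baseline:.0f}, single {single:.0f}, combined {combined:.0f}")
```

In this toy setting a single moderate intervention often leaves the process supercritical, while stacking several pushes the effective reproduction number below one, which is the qualitative pattern the paper reports at corpus scale.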

Language: English

Citations

105

Misunderstanding the harms of online misinformation
Ceren Budak, Brendan Nyhan, David Rothschild

et al.

Nature, Journal Year: 2024, Volume and Issue: 630(8015), P. 45 - 53

Published: June 5, 2024

Language: English

Citations

28

"Learn the Facts about COVID-19": Analyzing the Use of Warning Labels on TikTok Videos DOI Open Access

Chen Ling, Krishna P. Gummadi, Savvas Zannettou

et al.

Proceedings of the International AAAI Conference on Web and Social Media, Journal Year: 2023, Volume and Issue: 17, P. 554 - 565

Published: June 2, 2023

During the COVID-19 pandemic, health-related misinformation and harmful content shared online had a significant adverse effect on society. In an attempt to mitigate this effect, mainstream social media platforms like Facebook, Twitter, and TikTok employed soft moderation interventions (i.e., warning labels) on potentially harmful posts. Such interventions aim to inform users about a post's content without removing it, hence easing the public's concerns about censorship and freedom of speech. Despite the recent popularity of these interventions, as a research community we lack empirical analyses aiming to uncover how warning labels are used in the wild, particularly during challenging times like the pandemic. In this work, we analyze the use of warning labels on TikTok, focusing on COVID-19 videos. First, we construct a set of 26 COVID-19-related hashtags, and then collect 41K videos that include those hashtags in their description. Second, we perform a quantitative analysis on the entire dataset to understand the use of warning labels on TikTok. Then, we perform an in-depth qualitative study, using thematic analysis, on 222 videos to assess the content and its connection to the warning labels. Our analysis shows that TikTok broadly applies warning labels to COVID-19 videos, likely based on the hashtags included in the description (e.g., 99% of videos that contain #coronavirus have warning labels). More worrying is the addition of COVID-19 warning labels to videos whose actual content is not related to COVID-19 (23% of cases in a sample of 143 English videos unrelated to COVID-19). Finally, our qualitative analysis shows that 7.7% of the videos share misinformation/harmful content without warning labels, 37.3% share benign information with warning labels, and 35% of the videos that share misinformation/harmful content (and need a label) are made for fun. Our study demonstrates the need to develop more accurate and precise soft moderation systems, especially for a platform that is extremely popular among people of younger age.
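
The measurement the abstract describes amounts to label-coverage statistics over video metadata. The sketch below illustrates that kind of computation on a toy table; the column names (hashtags, has_label, is_covid_related) and the records are hypothetical stand-ins for the paper's 41K-video corpus, not its actual data.

```python
# Illustrative sketch only: warning-label coverage by hashtag, and the share of
# labeled videos whose content is unrelated to COVID-19 (per manual annotation).
import pandas as pd

videos = pd.DataFrame({
    "video_id": [1, 2, 3, 4],
    "hashtags": [["coronavirus"], ["fyp"], ["coronavirus", "fyp"], ["covid19"]],
    "has_label": [True, False, True, True],
    "is_covid_related": [True, False, False, True],  # hypothetical annotations
})

# Share of videos carrying a given hashtag that received a warning label.
has_tag = videos["hashtags"].apply(lambda tags: "coronavirus" in tags)
coverage = videos.loc[has_tag, "has_label"].mean()

# Share of labeled videos whose content is not actually about COVID-19,
# i.e. likely labeled from the hashtag alone.
labeled = videos[videos["has_label"]]
over_labeled = (~labeled["is_covid_related"]).mean()

print(f"label coverage for #coronavirus: {coverage:.0%}")
print(f"labeled but unrelated content:   {over_labeled:.0%}")
```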

Language: English

Citations

22

Research note: Examining how various social media platforms have responded to COVID-19 misinformation
Nandita Krishnan, Jiayan Gu, Rebekah Tromble

et al.

Published: Dec. 15, 2021

We analyzed community guidelines and official news releases and blog posts from 12 leading social media and messaging platforms (SMPs) to examine their responses to COVID-19 misinformation. While the majority of platforms stated that they prohibited COVID-19 misinformation, many lacked clarity and transparency. Facebook, Instagram, YouTube, and Twitter had largely consistent responses, but other platforms varied with regard to the types of content prohibited, the criteria guiding their responses, and the remedies developed to address misinformation. Only YouTube described the systems used for applying various remedies. These differences highlight the need to establish general standards across platforms to address COVID-19 misinformation more cohesively.

Language: English

Citations

40

Identifying cross-platform user relationships in 2020 U.S. election fraud and protest discussions
Isabel Murdock, Kathleen M. Carley, Osman Yağan

et al.

Online Social Networks and Media, Journal Year: 2023, Volume and Issue: 33, P. 100245 - 100245

Published: Jan. 1, 2023

Understanding how social media users interact with each other and spread information across multiple platforms is critical for developing effective methods of promoting truthful information and disrupting misinformation, as well as for accurately simulating multi-platform information diffusion. This work explores five approaches to identifying relationships between users involved in cross-platform information spread. We use a combination of user attributes and URL posting behaviors to find users who appear to purposely spread the same content over multiple platforms or transfer content to new platforms. To evaluate the outlined approaches, we apply them to a dataset of 24M posts from Twitter, Facebook, Reddit, and Instagram relating to the 2020 U.S. presidential election. We then characterize and validate our results using a null model analysis and the component structure of the networks returned by each approach. We subsequently examine the political bias, fact ratings, and performance of the content posted by the identified sets of users, and find that the different approaches yield sets of users with largely distinct biases and content preferences.
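
As a rough illustration of one signal mentioned above, the sketch below links accounts on different platforms that post many of the same URLs. It is not the paper's method; the threshold and the toy post records are assumptions for demonstration.

```python
# Hypothetical sketch: candidate cross-platform account pairs based on the
# number of URLs they both posted.
from collections import defaultdict
from itertools import combinations

posts = [  # (platform, user, url) -- toy records
    ("twitter", "user_tw_1", "http://example.com/a"),
    ("reddit",  "user_rd_1", "http://example.com/a"),
    ("twitter", "user_tw_1", "http://example.com/b"),
    ("reddit",  "user_rd_1", "http://example.com/b"),
    ("facebook", "user_fb_1", "http://example.com/c"),
]

urls_by_account = defaultdict(set)
for platform, user, url in posts:
    urls_by_account[(platform, user)].add(url)

def shared_url_pairs(min_shared=2):
    """Yield pairs of accounts on different platforms sharing >= min_shared URLs."""
    for (acc_a, urls_a), (acc_b, urls_b) in combinations(urls_by_account.items(), 2):
        if acc_a[0] != acc_b[0] and len(urls_a & urls_b) >= min_shared:
            yield acc_a, acc_b, len(urls_a & urls_b)

for a, b, n in shared_url_pairs():
    print(f"{a} <-> {b}: {n} shared URLs")
```

A null-model check like the one the paper describes would then compare these counts against pairs obtained after shuffling which account posted which URL.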

Language: English

Citations

13

Falling for Russian Propaganda: Understanding the Factors that Contribute to Belief in Pro-Kremlin Disinformation on Social Media
Felipe Bonow Soares, Anatoliy Gruzd, Philip Mai

et al.

Social Media + Society, Journal Year: 2023, Volume and Issue: 9(4)

Published: Oct. 1, 2023

As Russia launched its full-scale invasion of Ukraine in February 2022, social media was rife with pro-Kremlin disinformation. To effectively tackle the issue of state-sponsored disinformation campaigns, this study examines the underlying reasons why some individuals are susceptible to false claims and explores ways to reduce their susceptibility. It uses linear regression analysis on data from a national survey of 1,500 adults (18+) to examine the factors that predict belief in pro-Kremlin narratives regarding the Russia–Ukraine war. Our research finds that belief in pro-Kremlin disinformation is politically motivated and linked to users who: (1) hold conservative views, (2) trust partisan media, and (3) frequently share political opinions on social media. The findings also show that exposure to pro-Kremlin disinformation is positively associated with belief in it. Conversely, trust in mainstream media is negatively associated with belief in disinformation, offering a potential way to mitigate its impact.
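
The study's analysis is a linear regression over survey responses. The sketch below shows that kind of model with statsmodels; the data file and all variable names (belief_disinfo, conservatism, trust_partisan_media, shares_political_opinions, exposure, trust_mainstream_media) are hypothetical placeholders, not the study's instrument.

```python
# Hedged sketch of an OLS regression predicting belief in disinformation
# from hypothetical survey variables.
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("survey_1500_adults.csv")  # hypothetical file

model = smf.ols(
    "belief_disinfo ~ conservatism + trust_partisan_media"
    " + shares_political_opinions + exposure + trust_mainstream_media",
    data=survey,
).fit()

# Positive coefficients indicate predictors of stronger belief; a negative
# coefficient on trust_mainstream_media would match the mitigating
# association reported in the abstract.
print(model.summary())
```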

Language: English

Citations

12

A large-scale sentiment analysis of tweets pertaining to the 2020 US presidential election
Rao Hamza Ali, Gabriela Pinto, Evelyn Lawrie

et al.

Journal Of Big Data, Journal Year: 2022, Volume and Issue: 9(1)

Published: June 16, 2022

Abstract We capture the public sentiment towards candidates in the 2020 US Presidential Elections by analyzing 7.6 million tweets sent out between October 31st and November 9th, 2020. We apply a novel approach to first identify user accounts in our database that were later deleted or suspended from Twitter. This allows us to observe the sentiment held for each presidential candidate across various groups of users and tweets: those from accessible accounts and those from inaccessible accounts. We compare the sentiment scores calculated for these groups and provide key insights into their differences. Most notably, we show that tweets posted after Election Day were more favorable to Joe Biden, while the ones leading up to it were more positive about Donald Trump. Also, the older a Twitter account was, the more favorably it would post about Biden. The aim of this study is to highlight the importance of conducting sentiment analysis on all posts captured in real time, including those that are now inaccessible, in determining the true sentiments and opinions around the time of an event.
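
For readers unfamiliar with this kind of pipeline, the sketch below scores tweet sentiment with VADER (via nltk) and compares groups of accounts. The toy tweets and the accessible/inaccessible flag are assumptions standing in for the 7.6-million-tweet corpus; the paper's own scoring method may differ.

```python
# Illustrative sketch: mean VADER compound sentiment per account group.
from statistics import mean
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

tweets = [  # (text, account_still_accessible) -- toy records
    ("Great rally tonight!", True),
    ("This election is a disaster.", False),
    ("Feeling hopeful about the results.", True),
]

def group_score(accessible: bool) -> float:
    """Mean compound score (-1 to 1) over tweets from one group of accounts."""
    scores = [sia.polarity_scores(text)["compound"]
              for text, ok in tweets if ok == accessible]
    return mean(scores) if scores else float("nan")

print("accessible accounts:  ", group_score(True))
print("inaccessible accounts:", group_score(False))
```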

Language: English

Citations

18

Online conspiracy communities are more resilient to deplatforming
Corrado Monti, Matteo Cinelli, Carlo Michele Valensise

et al.

PNAS Nexus, Journal Year: 2023, Volume and Issue: 2(10)

Published: Sept. 29, 2023

Abstract Online social media foster the creation of active communities around shared narratives. Such communities may turn into incubators for conspiracy theories, some spreading violent messages that could sharpen the debate and potentially harm society. To face these phenomena, most platforms implemented moderation policies, ranging from posting warning labels up to deplatforming, i.e. permanently banning users. Assessing the effectiveness of content moderation is crucial for balancing societal safety while preserving the right to free speech. In this article, we compare the shift in behavior of users affected by the ban of two large communities on Reddit, GreatAwakening and FatPeopleHate, which were dedicated to QAnon and to body-shaming individuals, respectively. Following the ban, both communities partially migrated to Voat, an unmoderated Reddit clone. We estimate how many users migrate, finding that users in the conspiracy community are much more likely to leave Reddit altogether and join Voat. Then, we quantify the behavioral shift within and across platforms by matching common users on Reddit and Voat. While the general activity is lower on the new platform, users who decided to leave Reddit completely maintain a similar level of activity. Toxicity strongly increases in both communities. Finally, migrating users tend to recreate their previous social network on Voat. Our findings suggest that moderation of communities hosting such content should be carefully designed, as conspiracy communities are resilient to deplatforming.
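
The comparison hinges on matching users across platforms and comparing their activity before and after the ban. The sketch below shows the basic bookkeeping on toy counts matched by username; the numbers and the matching-by-username assumption are illustrative, not the paper's data or full procedure.

```python
# Hypothetical sketch: migration rate and per-user activity ratio for users of
# a banned subreddit who reappear on Voat under the same username.
reddit_posts = {"user_a": 120, "user_b": 45, "user_c": 300}  # posts before ban
voat_posts   = {"user_a": 80,  "user_c": 10, "user_d": 55}   # posts after ban

migrants = set(reddit_posts) & set(voat_posts)  # matched by username
leavers  = set(reddit_posts) - set(voat_posts)

migration_rate = len(migrants) / len(reddit_posts)
activity_ratio = {u: voat_posts[u] / reddit_posts[u] for u in migrants}
# A ratio near 1.0 means the user kept a similar activity level on the new platform.

print(f"migration rate: {migration_rate:.0%}")
print("activity ratios:", activity_ratio)
print("did not migrate:", sorted(leavers))
```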

Language: English

Citations

11

The Electoral Misinformation Nexus: How News Consumption, Platform Use, and Trust in News Influence Belief in Electoral Misinformation

Camila Mont’Alverne, Amy Ross Arguedas, Sayan Banerjee

et al.

Public Opinion Quarterly, Journal Year: 2024, Volume and Issue: 88(SI), P. 681 - 707

Published: Jan. 1, 2024

Abstract Electoral misinformation, where citizens believe false or misleading claims about the electoral process and institutions, sometimes actively and strategically spread by political actors, is a challenge to public confidence in elections specifically and to democracy more broadly. In this article, we analyze a combination of 42 million clicks on links and apps from behavioral tracking data for 2,200 internet users and a four-wave panel survey to investigate how different kinds of online news and platform use relate to beliefs in electoral misinformation during a contentious period, the 2022 Brazilian presidential elections. We find that, controlling for other factors, using legacy news media is associated with belief in fewer false claims over time. We find null or inconsistent effects for digital-born outlets and various digital platforms, including Facebook and WhatsApp. Furthermore, we find that trust in news plays a significant role as a moderator. Belief in electoral misinformation, in turn, undermines trust in news. Overall, our findings document the important role of news as an institution in curbing electoral misinformation, even as they also underline its precarity during contentious periods.
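
The moderation effect described above is typically estimated with an interaction term. The sketch below shows that specification in statsmodels; the panel file and variable names (misinfo_belief, legacy_news_use, trust_in_news, wave) are hypothetical placeholders rather than the study's actual measures or estimation strategy.

```python
# Hedged sketch: trust in news as a moderator of the association between
# legacy news use and belief in electoral misinformation.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("panel_waves.csv")  # hypothetical four-wave panel data

# The legacy_news_use * trust_in_news term lets the news-use association
# vary with trust; C(wave) adds survey-wave fixed effects.
model = smf.ols(
    "misinfo_belief ~ legacy_news_use * trust_in_news + C(wave)",
    data=panel,
).fit()

print(model.params)
```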

Language: English

Citations

4

Who knowingly shares false political information online?
Shane Littrell, Casey Klofstad, Amanda B. Diekman

et al.

Published: Aug. 24, 2023

Some people share misinformation accidentally, but others do so knowingly. To fully understand the spread of misinformation online, it is important to analyze those who purposely share it. Using a 2022 U.S. survey, we found that 14 percent of respondents reported knowingly sharing misinformation, and that these respondents were more likely to also report support for political violence, a desire to run for office, and warm feelings toward extremists. They also have elevated levels of a psychological need for chaos, dark tetrad traits, and paranoia. Our findings illuminate one vector through which misinformation is spread online.

Language: English

Citations

9