Perceived legitimacy of layperson and expert content moderators DOI Open Access
Cameron Martel, Adam J. Berinsky, David G. Rand

et al.

Published: June 23, 2024

Content moderation is a critical aspect of platform governance on social media and is of particular relevance to addressing belief in and the spread of misinformation. However, current content moderation practices have been criticized as unjust. This raises an important question – who do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative conjoint survey experiment (N=3,000) in which U.S. participants evaluated the legitimacy of hypothetical juries tasked with evaluating whether content was harmfully misleading. These juries varied in whether they were described as consisting of experts (e.g., domain experts), laypeople (e.g., social media users), or non-juries (e.g., a computer algorithm). We also randomized features of jury composition (size, necessary qualifications) and whether the jury engaged in discussion during evaluation. Overall, expert juries were perceived as more legitimate than layperson juries or a computer algorithm. However, modifying layperson jury features helped increase legitimacy perceptions: politically balanced layperson juries enhanced perceived legitimacy, as did increased jury size, individual juror knowledge qualifications, and enabling juror discussion. Maximally legitimate layperson juries were comparably legitimate to expert panels. Republicans perceived expert juries as less legitimate compared to Democrats, but still more legitimate than baseline layperson juries. Conversely, larger lay juries with news knowledge qualifications were perceived as legitimate across the political spectrum. Our findings shed light on the foundations of procedural legitimacy in content moderation and have implications for the design of content moderation systems.

Language: English

Misinformation warning labels are widely effective: A review of warning effects and their moderating features DOI
Cameron Martel, David G. Rand

Current Opinion in Psychology, Journal Year: 2023, Volume and Issue: 54, P. 101710 - 101710

Published: Oct. 19, 2023

Language: English

Citations: 42

On the Efficacy of Accuracy Prompts Across Partisan Lines: An Adversarial Collaboration DOI
Cameron Martel, Steve Rathje, Connie J. Clark

et al.

Psychological Science, Journal Year: 2024, Volume and Issue: 35(4), P. 435 - 450

Published: March 20, 2024

The spread of misinformation is a pressing societal challenge. Prior work shows that shifting attention to accuracy increases the quality of people’s news-sharing decisions. However, researchers disagree on whether accuracy-prompt interventions work for U.S. Republicans/conservatives and whether partisanship moderates the effect. In this preregistered adversarial collaboration, we tested this question using a multiverse meta-analysis (k = 21; N = 27,828). In all 70 models, accuracy prompts improved sharing discernment among Republicans/conservatives. We observed significant partisan moderation for single-headline “evaluation” treatments (a critical test for one research team), such that the effect was stronger among Democrats than Republicans. However, this moderation was not consistently robust across different operationalizations of ideology/partisanship, exclusion criteria, or treatment type. Overall, partisan moderation was significant in 50% of specifications (all of which were considered critical tests by the other team). We discuss the conditions under which moderation holds and offer interpretations.

Language: English

Citations: 7

Combating misinformation: A megastudy of nine interventions designed to reduce the sharing of and belief in false and misleading headlines DOI Open Access
Lisa K. Fazio, David G. Rand, Stephan Lewandowsky

et al.

Published: June 23, 2024

Researchers have tested a variety of interventions to combat misinformation on social media (e.g., accuracy nudges, digital literacy tips, inoculation, debunking). These interventions work via different psychological mechanisms, but all share the goals of increasing recipients’ ability to distinguish between true and false information and/or increasing the veracity of news shared on social media. The current megastudy with 33,233 US-based participants tests nine prominent interventions in an identical setting using true, false, and misleading health and political headlines. We find that a wide range of interventions can improve discernment in veracity judgments and/or sharing judgments. Reducing belief in and sharing of misinformation is thus a goal accomplishable through multiple strategies targeting different mechanisms.

Language: English

Citations: 7

Countering misinformation through psychological inoculation DOI
Sander van der Linden

Advances in Experimental Social Psychology, Journal Year: 2023, Volume and Issue: unknown, P. 1 - 58

Published: Dec. 19, 2023

Language: English

Citations: 12

Best practices for source-based research on misinformation and news trustworthiness using NewsGuard DOI Creative Commons
Jula Lühring, H. Metzler, Ruggero Marino Lazzaroni

et al.

Journal of Quantitative Description: Digital Media, Journal Year: 2025, Volume and Issue: 5

Published: Jan. 14, 2025

Researchers need reliable and valid tools to identify cases of untrustworthy information when studying the spread of misinformation on digital platforms. A common approach is to assess the trustworthiness of sources rather than individual pieces of content. One of the most widely used and comprehensive databases for source ratings is provided by NewsGuard. Since creating the database in 2019, NewsGuard has continually added new sources and reassessed existing ones. While initially focused only on the US, the database has expanded to include sources from other countries. In addition to trustworthiness ratings, it contains various contextual assessments of sources, which are less often used in contemporary research on misinformation. In this work, we provide an analysis of the content of the database, focusing on the temporal stability and completeness of its ratings across countries, as well as its usefulness for studies of political orientation and topics. We find that coverage and ratings have remained relatively stable since 2022, particularly for France, Italy, Germany, and Canada, with US-based sources consistently scoring lower than those from other countries. The additional information on covered sources provides valuable assets for characterizing them beyond trustworthiness. By evaluating the database over time, we identify potential pitfalls that may compromise the validity of using such a tool for quantifying untrustworthy information, particularly if dichotomous "trustworthy"/"untrustworthy" labels are used. Lastly, we offer recommendations for media researchers on how to avoid these pitfalls and discuss the appropriate use of source-level approaches in general.

Language: English

Citations: 0

Unbundling Digital Media Literacy Tips: Results from Two Experiments DOI Open Access
Andrew M. Guess, Shannon C. McGregor, Gordon Pennycook

et al.

Published: March 26, 2024

Recent studies have found promising evidence that lightweight, scalable tips promoting digital media literacy can improve the overall accuracy of social media users’ sharing intentions and their ability to determine true versus false headlines. However, existing research is designed to test entire bundles of such tips, which limits our practical knowledge about whether some kinds of tips are more effective than others and hinders our ability to theorize about mechanisms. We address this limitation by designing experiments in which we randomly assign participants to receive one of 10 possible tips (or none, in a pure control group) and then indicate the extent to which they either believe or would share a series of posts. We find that assignment to nearly any tip improves sharing, but only the tip drawing attention to posts’ source improved discernment (because source was highly diagnostic in the stimulus set). Sharing intent appears to be more malleable than belief, consistent with the idea that fickle processes like attention play an important role in driving sharing behavior.

Language: English

Citations: 3

Misinformation warning labels are widely effective: A review of warning effects and their moderating features DOI Open Access
Cameron Martel, David G. Rand

Published: Oct. 14, 2023

There is growing concern over the spread of misinformation online. One intervention widely adopted by platforms for addressing falsehoods is applying ‘warning labels’ to posts deemed inaccurate by fact-checkers. Despite a rich literature on correcting misinformation after exposure, much less work has examined the effectiveness of warning labels presented concurrent with exposure. Promisingly, existing research suggests that warning labels effectively reduce belief in and spread of misinformation. The size of these beneficial effects depends on how the labels are implemented and the characteristics of the content being labeled. Despite some individual differences, recent evidence indicates that warning labels are generally effective across party lines and other demographic characteristics. We discuss the potential implications and limitations of warning labelling policies online.

Language: English

Citations: 4

Accuracy prompts protect professional content moderators from the illusory truth effect DOI Open Access
Hause Lin, Marlyn Thomas Savio, Xieyining Huang

et al.

Published: March 3, 2024

Content moderators review problematic content for technology companies. One concern about this critical job is that repeated exposure to false claims could cause moderators to come to believe the very claims they are supposed to moderate, via the “illusory truth effect.” In a first field experiment with a global content moderation company (N = 199), we found that repeated exposure while working as moderators did indeed increase subsequent belief among the (mostly Indian and Philippine) employees. We then tested an intervention to mitigate the effect: inducing an accuracy mindset. In both general population samples (N_India = 997; N_Philippines = 1184) and a second experiment with professional moderators (N = 239), we replicate the illusory truth effect in the control condition, and find that prompting participants to consider accuracy when exposed to claims eliminates any effect of repetition on belief in falsehoods. These results show that the protective power of an accuracy mindset generalizes to non-Western populations and professional moderators. They highlight the importance of such interventions for ensuring a healthy internet for everyone.

Language: English

Citations: 1

Misinformation is more than "fake news": Using co-sharing to identify use of mainstream news for promoting misinformation narratives DOI Creative Commons
Pranav Goel, Jon Green, David Lazer

et al.

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: May 29, 2024

Abstract: Most research concerning the volume and spread of misinformation on the internet measures the construct at the source level, identifying a set of specific "fake" news domains that account for a relatively small share of overall news consumption. This source-level categorization obscures the potential for factually true information from mainstream sources to be used in service of false or misleading narratives, a potentially far more prevalent form of misinformation. Using a combination of text- and network-analytic techniques, we find that articles from reliable sources that are co-shared with "fake" news (i.e., shared by users who also shared "fake" news) on social media are significantly more likely to contain misinformation narratives than articles from the same sources that are not co-shared. This is consistent with users strategically re-purposing mainstream news to enhance the credibility and reach of misleading claims. Our frameworks broaden both the empirical and theoretical scope of misinformation research.

Language: English

Citations: 1

Nudge-based misinformation interventions are effective in information environments with low misinformation prevalence DOI Creative Commons
Lucy H. Butler, Toby Prike, Ullrich K. H. Ecker

et al.

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: May 20, 2024

Nudge-based misinformation interventions are presented as cheap and effective ways to reduce the spread of misinformation online. However, despite online information environments typically containing relatively low volumes of misinformation, most studies testing the effectiveness of nudge interventions present equal proportions of true and false information. As nudges can be highly context-dependent, it is imperative to validate nudge-based interventions in environments with more realistic proportions of misinformation. The current study (N = 1387) assessed a combined accuracy and social-norm nudge in a simulated social-media environment with varying proportions of misinformation (50%, 20%, 12.5%) relative to non-news-based (i.e., "social") content. The intervention was effective at improving sharing discernment even in conditions with lower misinformation prevalence, providing ecologically valid support for the use of nudge-based interventions to counter misinformation propagation on social media.

Language: English

Citations: 1