Social drivers and algorithmic mechanisms on digital media
H. Metzler, David García

Published: Dec. 5, 2022

On digital media, algorithms that process data and recommend content have become ubiquitous. Their fast and barely regulated adoption has raised concerns about their role in well-being, both at the individual and collective levels. Algorithmic mechanisms on digital media are powered by social drivers, creating a feedback loop that complicates research trying to disentangle their effects from those of already existing phenomena. Our brief overview of current evidence on how algorithms affect well-being, misinformation, and polarization suggests that their role in these phenomena is far from straightforward and that substantial further empirical research is needed. Existing evidence indicates that algorithms mostly reinforce existing social drivers, which stresses the importance of reflecting on the larger societal context, including individualism, populist politics, and climate change. We present concrete ideas and research questions to improve digital platforms and to investigate both the problems and potential solutions. Finally, we discuss how the shift toward a more algorithmically curated media environment brings risks as well as opportunities if platforms are designed for flourishing rather than short-term profit.

Language: English

Can Fighting Misinformation Have a Negative Spillover Effect? How Warnings for the Threat of Misinformation Can Decrease General News Credibility
Toni G.L.A. van der Meer, Michael Hameleers, Jakob Ohme

et al.

Journalism Studies, Year: 2023, Volume 24(6), pp. 803-823

Published: March 16, 2023

In the battle against misinformation, do negative spillover effects of communicative efforts intended to protect audiences from inaccurate information exist? Given the relatively limited prevalence of misinformation in people's news diets, this study explores whether the heightened salience of misinformation as a persistent societal threat can have an unintended effect by decreasing the credibility of factually accurate news. Using an experimental design (N = 1305), we test whether news credibility ratings are subject to exposure to corrective information, misinformation warnings, and news media literacy (NML) interventions that relativize the threat. Findings suggest that interventions like warnings about misinformation can prime general distrust in authentic news, hinting toward a deception bias in a context where fear of misinformation is salient. In addition, the success of NML interventions is not straightforward when it comes to avoiding that the salient threat distorts the perceived credibility and accuracy of authentic news. We conclude that threats to the information order may not just be remedied by fighting false information but also by reestablishing trust in legitimate news.
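As an illustration of the kind of between-groups comparison this abstract describes (not the authors' code or data), a minimal Python sketch: compare mean credibility ratings of accurate news across hypothetical intervention conditions. File, column, and condition names are assumptions.

    import pandas as pd
    from scipy import stats

    # Hypothetical data: one row per participant, with assigned condition and
    # their credibility rating of factually accurate news.
    df = pd.read_csv("credibility_ratings.csv")
    conditions = ["control", "correction", "warning", "nml_intervention"]
    groups = [df.loc[df["condition"] == c, "credibility_accurate_news"] for c in conditions]

    # Descriptive statistics per condition, then a one-way ANOVA across conditions.
    print(df.groupby("condition")["credibility_accurate_news"].agg(["mean", "std", "count"]))
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")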

Language: English

Cited:

61

People believe misinformation is a threat because they assume others are gullible

Sacha Altay,

Alberto Acerbi

New Media & Society, Year: 2023, Volume 26(11), pp. 6440-6461

Published: Feb. 17, 2023

Alarmist narratives about the flow of misinformation and its negative consequences have gained traction in recent years. While these fears may be to some extent warranted, the scientific literature suggests that many of them are exaggerated. Why are people so worried about misinformation? In two pre-registered surveys conducted in the United Kingdom (N_study1 = 300, N_study2 = 300) and replicated in the United States (N_study1 = 302, N_study2 = 299), we investigated the psychological factors associated with the perceived danger of misinformation and how it contributes to the popularity of alarmist narratives on misinformation. We find that the strongest and most reliable predictor of perceived danger is the third-person effect (i.e., the perception that others are more vulnerable to misinformation than the self) and, in particular, the belief that “distant” others (as opposed to family and friends) are vulnerable. The belief that societal problems have simple solutions and clear causes was consistently, but weakly, associated with the perceived danger of online misinformation. Other factors, like attitudes toward new technologies and higher sensitivity to threats, were associated with it only inconsistently. Finally, we found that participants who perceive misinformation as more dangerous report being more willing to share alarmist narratives about it. Our findings suggest that alarmist narratives on misinformation tap into our tendency to view other people as gullible.
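As a worked illustration of the third-person-effect measure this abstract refers to (a sketch under assumed column names, not the authors' materials): the score is the gap between how vulnerable respondents think others are and how vulnerable they think they themselves are, which can then be related to perceived danger.

    import pandas as pd
    from scipy import stats

    # Hypothetical survey file: one row per respondent.
    df = pd.read_csv("survey.csv")

    # Third-person effect: perceived vulnerability of others minus own vulnerability.
    df["third_person_effect"] = df["vulnerability_others"] - df["vulnerability_self"]

    # Association with perceived danger of misinformation.
    r, p = stats.pearsonr(df["third_person_effect"], df["perceived_danger"])
    print(f"r = {r:.2f}, p = {p:.4f}")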

Language: English

Cited:

39

A survey of expert views on misinformation: Definitions, determinants, solutions, and future of the field

Sacha Altay,

Manon Berriche, Hendrik Heuer

et al.

Published: July 27, 2023

We surveyed 150 academic experts on misinformation and identified areas of expert consensus. Experts defined misinformation as false and misleading information, though views diverged on the importance of intentionality and on what exactly constitutes misinformation. The most popular reason why people believe and share misinformation was partisanship, while lack of education was one of the least popular reasons. Experts were optimistic about the effectiveness of interventions against misinformation and supported system-level actions, such as platform design changes and algorithmic changes. The most agreed-upon future direction for the field was to collect more data outside the United States.

Language: English

Cited:

39

People are skeptical of headlines labeled as AI-generated, even if true or human-made, because they assume full AI automation

Sacha Altay,

Fabrizio Gilardi

PNAS Nexus, Year: 2024, Volume 3(10)

Published: Oct. 1, 2024

The rise of generative AI tools has sparked debates about the labeling of AI-generated content. Yet the impact of such labels remains uncertain. In two preregistered online experiments among US and UK participants (N = 4,976), we show that while participants did not equate “AI-generated” with “False,” labeling headlines as AI-generated lowered their perceived accuracy and participants’ willingness to share them, regardless of whether the headlines were true or false and whether they were created by humans or by AI. The effect was three times smaller than that of labeling headlines as false. This AI aversion is due to expectations that headlines labeled as AI-generated have been entirely written by AI with no human supervision. These findings suggest that the labeling of AI-generated content should be approached cautiously to avoid unintended negative effects on harmless or even beneficial content, and that effective deployment of labels requires transparency regarding their meaning.

Language: English

Cited:

10

Misinformation on Misinformation: Conceptual and Methodological Challenges

Sacha Altay,

Manon Berriche, Alberto Acerbi

et al.

Published: Nov. 9, 2021

Alarmist narratives about online misinformation continue to gain traction despite evidence that its prevalence and impact are overstated. Drawing on research examining the use of big data in social science and on reception studies, we identify six misconceptions about misinformation and highlight the conceptual and methodological challenges they raise. The first set of misconceptions concerns the circulation of misinformation. First, scientists focus on social media because it is methodologically convenient, but misinformation is not just a social media problem. Second, the internet is not rife with misinformation or news but with memes and entertaining content. Third, falsehoods do not spread faster than the truth; how we define (mis)information influences our results and their practical implications. The second set concerns the reception of misinformation. Fourth, people do not believe everything they see on the internet: the sheer volume of engagement should not be conflated with belief. Fifth, people are more likely to be uninformed than misinformed; surveys overestimate misperceptions and say little about the causal influence of misinformation. Sixth, the influence of misinformation on people’s behavior is overblown, as misinformation often ‘preaches to the choir’. To appropriately understand and fight misinformation, future research needs to address these challenges.

Language: English

Cited:

49

Psychological interventions countering misinformation in social media: A scoping review
Paweł Gwiaździński, Aleksander B. Gundersen, Michał Piksa

et al.

Frontiers in Psychiatry, Year: 2023, Volume 13

Published: Jan. 5, 2023

The rise in the number of social media users and the explosive growth in misinformation shared across platforms have become a serious threat to democratic discourse and public health. These implications have increased the demand for misinformation detection and intervention. To contribute to this challenge, we present a systematic scoping review of psychological interventions countering misinformation in social media. The review was conducted to (i) identify and map the evidence on psychological interventions countering misinformation, (ii) compare the viability of these interventions in social media, and (iii) provide guidelines for the development of effective interventions. A search of three bibliographic databases (PubMed, Embase, Scopus) and additional searches of Google Scholar and reference lists were conducted. 3,561 records were identified, 75 of which met the eligibility criteria for inclusion in the final review. The interventions identified can be classified into the categories distinguished by Kozyreva et al. (Boosting, Technocognition, and Nudging) and then into 15 types within these. Most of the studied interventions were not implemented and tested in a real social media environment but under strictly controlled settings or on online crowdsourcing platforms. The presented feasibility assessment of implementation, with insights expressed qualitatively and with numerical scoring, could guide future research so that interventions can be successfully implemented on social media platforms. The review provides a basis for further research on counteracting misinformation. Future interventions should aim to combine Technocognition and Nudging with the user experience of social media services. Systematic review registration: [https://figshare.com/], identifier [https://doi.org/10.6084/m9.figshare.14649432.v2].

Language: English

Cited:

21

Social Drivers and Algorithmic Mechanisms on Digital Media
H. Metzler, David García

Perspectives on Psychological Science, Year: 2023, Volume 19(5), pp. 735-748

Published: July 19, 2023

On digital media, algorithms that process data and recommend content have become ubiquitous. Their fast and barely regulated adoption has raised concerns about their role in well-being, both at the individual and collective levels. Algorithmic mechanisms on digital media are powered by social drivers, creating a feedback loop that complicates research trying to disentangle their effects from those of already existing phenomena. Our brief overview of current evidence on how algorithms affect well-being, misinformation, and polarization suggests that their role in these phenomena is far from straightforward and that substantial further empirical research is needed. Existing evidence indicates that algorithms mostly reinforce existing social drivers, a finding that stresses the importance of reflecting on the larger societal context that encompasses individualism, populist politics, and climate change. We present concrete ideas and research questions to improve digital platforms and to investigate both the problems and potential solutions. Finally, we discuss how the shift toward a more algorithmically curated media environment brings risks as well as opportunities if platforms are designed for flourishing rather than short-term profit.

Language: English

Cited:

20

Learning to evaluate sources of science (mis)information on the internet: Assessing students' scientific online reasoning
Daniel Pimentel

Journal of Research in Science Teaching, Year: 2024, Volume: unknown

Published: July 27, 2024

Students frequently turn to the internet for information about a range of scientific issues. However, they can find it challenging to evaluate the credibility of the information they find, which may increase their susceptibility to mis- and disinformation. This exploratory study reports findings from an instructional intervention designed to teach high school students to engage in scientific online reasoning (SOR), a set of competencies for evaluating sources of scientific information on the internet. Forty-three ninth-grade students participated in eleven instructional activities. They completed pre and post constructed-response tasks to assess three constructs: conflicts of interest, relevant expertise, and alignment with the scientific consensus. A subset (n = 6) also completed think-aloud tasks in which they evaluated websites of varying credibility. Students' written responses and screen-capture recordings were scored, coded, and analyzed using a mixed-methods approach. Findings demonstrate that after the intervention: (1) students' assessment scores improved significantly on all tasks, (2) students improved in their ability to distinguish between websites of varying credibility, and (3) more students used strategies for evaluating outside information. Areas for student growth are identified, such as improving the coordinated use of credibility criteria and strategies. These results suggest that teaching students to evaluate sources of scientific information, along with evaluation strategies, has the potential to help them assess the credibility of the information they encounter online.

Language: English

Cited:

6

Exposure to Higher Rates of False News Erodes Media Trust and Fuels Overconfidence

Sacha Altay,

Benjamin Lyons, Ariana Modirrousta-Galian

et al.

Published: April 5, 2023

In two online experiments (N = 2,735), we investigated whether forced exposure to high proportions of false news could have deleterious effects by sowing confusion and fueling distrust in news. In a between-subjects design in which U.S. participants rated the accuracy of true and false news, we manipulated the proportion of false news headlines participants were exposed to (17%, 33%, 50%, 66%, or 83%). We found that higher proportions of false news decreased trust in the news but did not affect participants' perceived accuracy of the headlines. While the proportion of false news had no effect on participants' overall ability to discern between true and false news, it made them more overconfident in their discernment ability. Therefore, exposure to false news may increase belief in falsehoods via overconfidence while eroding trust in the news. Although we are only able to shed light on one causal pathway, from the news environment to attitudes, this work can help us better understand the effects of external or supply-side changes in news quality.
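To make the two outcome measures in this abstract concrete, here is a minimal Python sketch, under assumed data layout and column names (not the authors' materials): discernment as the gap between mean accuracy ratings for true and false headlines, and overconfidence as self-reported confidence relative to that actual discernment, summarized per exposure condition.

    import pandas as pd

    # Hypothetical files: per-headline ratings and per-participant survey data.
    ratings = pd.read_csv("ratings.csv")                     # participant_id, headline_veracity, accuracy_rating
    survey = pd.read_csv("participants.csv").set_index("participant_id")  # condition, confidence, news_trust

    # Mean accuracy rating per participant for true vs. false headlines (pivot_table averages by default).
    mean_by_veracity = ratings.pivot_table(index="participant_id",
                                           columns="headline_veracity",
                                           values="accuracy_rating")

    # Discernment: rating of true headlines minus rating of false headlines.
    survey["discernment"] = mean_by_veracity["true"] - mean_by_veracity["false"]

    def zscore(s):
        return (s - s.mean()) / s.std()

    # Overconfidence: standardized confidence minus standardized actual discernment.
    survey["overconfidence"] = zscore(survey["confidence"]) - zscore(survey["discernment"])

    # Summary by manipulated proportion of false news.
    print(survey.groupby("false_news_proportion")[["discernment", "overconfidence", "news_trust"]].mean())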

Language: English

Cited:

12

Falling for Russian Propaganda: Understanding the Factors that Contribute to Belief in Pro-Kremlin Disinformation on Social Media
Felipe Bonow Soares, Anatoliy Gruzd, Philip Mai

et al.

Social Media + Society, Year: 2023, Volume 9(4)

Published: Oct. 1, 2023

As Russia launched its full-scale invasion of Ukraine in February 2022, social media was rife with pro-Kremlin disinformation. To effectively tackle the issue of state-sponsored disinformation campaigns, this study examines the underlying reasons why some individuals are susceptible to false claims and explores ways to reduce their susceptibility. It uses linear regression analysis on data from a national survey of 1,500 adults (18+) to examine the factors that predict belief in disinformation narratives regarding the Russia-Ukraine war. Our research finds that belief in pro-Kremlin disinformation is politically motivated and linked to users who (1) hold conservative views, (2) trust partisan media, and (3) frequently share political opinions on social media. Our findings also show that exposure to pro-Kremlin disinformation is positively associated with belief in it. Conversely, trust in mainstream media is negatively associated with belief in disinformation, offering a potential way to mitigate its impact.
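As a sketch of the kind of linear regression described in this abstract (illustrative only; variable and file names are assumptions, not the authors' code), one could regress belief in pro-Kremlin disinformation on political views, media trust, sharing frequency, and exposure:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical national survey data, one row per respondent.
    df = pd.read_csv("national_survey.csv")

    # OLS regression predicting belief in pro-Kremlin disinformation narratives.
    model = smf.ols(
        "belief_prokremlin ~ conservatism + trust_partisan_media + trust_mainstream_media"
        " + share_political_opinions + exposure_disinfo",
        data=df,
    ).fit()

    # Positive coefficients would indicate factors associated with higher belief,
    # negative coefficients (e.g., mainstream media trust) with lower belief.
    print(model.summary())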

Language: English

Cited:

12