Perceived legitimacy of layperson and expert content moderators DOI Open Access
Cameron Martel, Adam J. Berinsky, David G. Rand

et al.

Published: June 23, 2024

Content moderation is a critical aspect of platform governance on social media and of particular relevance to addressing the belief in and spread of misinformation. However, current content moderation practices have been criticized as unjust. This raises an important question: who do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative conjoint survey experiment (N=3,000) in which U.S. participants evaluated the legitimacy of hypothetical content moderation juries tasked with evaluating whether online content was harmfully misleading. These juries varied in whether they were described as consisting of experts (e.g., domain experts), laypeople (e.g., social media users), or non-juries (e.g., a computer algorithm). We also randomized features of jury composition (size, necessary qualifications) and whether juries engaged in discussion during evaluation. Overall, expert juries were perceived as more legitimate than layperson juries or a computer algorithm. However, modifying layperson jury features helped increase legitimacy perceptions: politically balanced juries enhanced perceived legitimacy, as did increased jury size, individual juror knowledge qualifications, and enabling juror discussion. Maximally legitimate layperson juries were rated comparably to expert panels. Republicans perceived expert juries as less legitimate compared to Democrats, but still rated expert juries as more legitimate than baseline layperson juries. Conversely, larger lay juries with news knowledge qualifications were perceived as legitimate across the political spectrum. Our findings shed light on the foundations of procedural legitimacy in content moderation and have implications for the design of content moderation systems.

Language: English

Misinformation warning labels are widely effective: A review of warning effects and their moderating features DOI
Cameron Martel, David G. Rand

Current Opinion in Psychology, Journal Year: 2023, Volume and Issue: 54, P. 101710 - 101710

Published: Oct. 19, 2023

Language: English

Citations

42

Concept Induction: Analyzing Unstructured Text with High-Level Concepts Using LLooM DOI Open Access
Michelle S. Lam, J. E. M. Teoh, James A. Landay

et al.

Published: May 11, 2024

Data analysts have long sought to turn unstructured text data into meaningful concepts. Though common, topic modeling and clustering focus on lower-level keywords and require significant interpretative work. We introduce concept induction, a computational process that instead produces high-level concepts, defined by explicit inclusion criteria, from unstructured text. For a dataset of toxic online comments, where a state-of-the-art BERTopic model outputs "women, power, female," concept induction produces concepts such as "Criticism of traditional gender roles" and "Dismissal of women's concerns." We present LLooM, a concept induction algorithm that leverages large language models to iteratively synthesize sampled text and propose human-interpretable concepts of increasing generality. We then instantiate LLooM in a mixed-initiative text analysis tool, enabling analysts to shift their attention from interpreting topics to engaging in theory-driven analysis. Through technical evaluations and four analysis scenarios ranging from literature review to content moderation, we find that LLooM's concepts improve upon the prior art in terms of quality and coverage. In expert case studies, LLooM helped researchers uncover new insights even from familiar datasets, for example suggesting previously unnoticed attacks on out-party stances in a political social media dataset.
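The iterative synthesize-then-score loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' LLooM implementation: `call_llm` is a hypothetical placeholder for a real language-model call, stubbed here with a trivial most-frequent-word rule so the sketch runs standalone, and the substring inclusion criterion stands in for LLooM's LLM-based concept scoring.

```python
# Minimal sketch of a concept-induction loop (not the LLooM codebase).
from collections import Counter

def call_llm(prompt: str) -> str:
    # Stub for an LLM call: a real system would ask the model to name a
    # high-level concept; here we just return the most frequent longer word.
    words = [w.lower().strip(".,'\"") for w in prompt.split() if len(w) > 4]
    return Counter(words).most_common(1)[0][0] if words else ""

def induce_concepts(documents, n_rounds=1):
    """Iteratively synthesize batches of text into higher-level concepts."""
    layer = documents
    for _ in range(n_rounds):
        # Synthesize pairs of items into candidate concepts of
        # increasing generality (LLooM loops this over sampled text).
        layer = [call_llm(" ".join(layer[i:i + 2]))
                 for i in range(0, len(layer), 2)]
    # Attach an explicit inclusion criterion to each concept; this toy
    # criterion is a substring match rather than an LLM judgment.
    return [(c, lambda doc, c=c: c in doc.lower()) for c in layer if c]

docs = ["Criticism of traditional gender roles online",
        "Dismissal of women's concerns in comment threads",
        "Criticism of gender stereotypes in media"]
concepts = induce_concepts(docs)
matched = {name: [d for d in docs if crit(d)] for name, crit in concepts}
```

The key design point mirrored here is that each induced concept carries an explicit, checkable inclusion criterion, so an analyst can audit which documents a concept actually covers.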

Language: English

Citations

14

Engagement, user satisfaction, and the amplification of divisive content on social media DOI Creative Commons
Smitha Milli, Micah Carroll, Yike Wang

et al.

PNAS Nexus, Journal Year: 2025, Volume and Issue: 4(3)

Published: Feb. 27, 2025

Abstract Social media ranking algorithms typically optimize for users' revealed preferences, i.e. user engagement such as clicks, shares, and likes. Many have hypothesized that by focusing on these signals, ranking algorithms may exacerbate human behavioral biases. In a preregistered algorithmic audit, we found that, relative to a reverse-chronological baseline, Twitter's engagement-based ranking algorithm amplifies emotionally charged, out-group hostile content that users say makes them feel worse about their political out-group. Furthermore, we find that users do not prefer the tweets selected by the engagement-based algorithm, suggesting that it underperforms in satisfying users' stated preferences. Finally, we explore the implications of an alternative approach that ranks content based on users' stated preferences and find a reduction in angry, partisan, and out-group hostile content, but also a potential reinforcement of proattitudinal content. Overall, our findings suggest that greater integration of stated preferences into social media ranking algorithms could promote better online discourse, though potential trade-offs warrant further investigation.
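The contrast between the two ranking objectives compared in this audit can be made concrete with a toy example. This is an illustrative sketch only, not Twitter's actual algorithm; the tweets and their scores are invented numbers standing in for a measured engagement signal and a measured stated-preference signal.

```python
# Toy contrast of engagement-based vs stated-preference ranking
# (illustrative scores, not real platform data).
tweets = [
    {"id": "a", "engagement": 0.9, "stated_pref": 0.2},  # divisive but clicky
    {"id": "b", "engagement": 0.4, "stated_pref": 0.8},  # preferred on reflection
    {"id": "c", "engagement": 0.6, "stated_pref": 0.6},
]

def rank(items, key):
    """Order tweet ids by the given score, highest first."""
    return [t["id"] for t in sorted(items, key=lambda t: -t[key])]

engagement_order = rank(tweets, "engagement")  # revealed preferences
stated_order = rank(tweets, "stated_pref")     # stated preferences
```

The two orderings can diverge sharply, which is the paper's core point: content that wins on clicks can lose when users are asked what they actually want to see.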

Language: English

Citations

1

People who share encounters with racism are silenced online by humans and machines, but a guideline-reframing intervention holds promise DOI Creative Commons
Cinoo Lee, Kristina Gligorić, Pratyusha Kalluri

et al.

Proceedings of the National Academy of Sciences, Journal Year: 2024, Volume and Issue: 121(38)

Published: Sept. 9, 2024

Are members of marginalized communities silenced on social media when they share personal experiences of racism? Here, we investigate the role of algorithms, humans, and platform guidelines in suppressing disclosures of racial discrimination. In a field study of actual posts from a neighborhood-based social media platform, we find that when users talk about their experiences as targets of racism, their posts are disproportionately flagged for removal as toxic by five widely used moderation algorithms from major online platforms, including the most recent large language models. We show that human users disproportionately flag these posts for removal as well. Next, in a follow-up experiment, we demonstrate that merely witnessing such suppression negatively influences how Black Americans view the community and their place in it. Finally, to address these challenges to equity and inclusion in online spaces, we introduce a mitigation strategy: a guideline-reframing intervention that is effective at reducing silencing behavior across the political spectrum.

Language: English

Citations

4

BleacherBot: AI Agent as a Sports Co-Viewing Partner DOI
Kyusik Kim, Hyungwoo Song, Jeongwoo Ryu

et al.

Published: April 25, 2025

Language: English

Citations

0

Governing the large language model commons: using digital assets to endow intellectual property rights DOI
Christos Makridis, Joshua Ammons

Journal of Institutional Economics, Journal Year: 2025, Volume and Issue: 21

Published: Jan. 1, 2025

Abstract The emergence of large language models (LLMs) has made it increasingly difficult to protect and enforce intellectual property (IP) rights in a digital landscape where content can be easily accessed and utilized without clear authorization. First, we explain why LLMs make it uniquely difficult to enforce IP, creating a 'tragedy of the commons.' Second, drawing on theories of polycentric governance, we argue that non-fungible tokens (NFTs) could serve as effective tools for addressing the complexities of IP rights. Third, we provide an illustrative case study that shows how NFTs can facilitate dispute resolution on the blockchain.

Language: English

Citations

0

Attraction to politically extreme users on social media DOI Creative Commons
Federico Zimmerman, David H. Bailey, Goran Murić

et al.

PNAS Nexus, Journal Year: 2024, Volume and Issue: 3(10)

Published: Oct. 1, 2024

Abstract Political segregation is a pressing issue, particularly on social media platforms. Recent research suggests that one driver of political segregation may be acrophily—people's preference for others in their political group who have more extreme (rather than moderate) views. However, acrophily has so far been found in lab experiments, where people choose whom to interact with based on little information. Furthermore, these studies have not examined whether acrophily is associated with animosity toward one's out-group. Using a combination of a survey experiment (N = 388) and an analysis of the retweet network on Twitter (3,898,327 unique ties), we find evidence of users' tendency toward acrophily in the context of social media. We observe that this tendency is more pronounced among conservatives and is associated with higher levels of out-group animosity. These findings provide important in- and out-of-the-lab understanding of acrophily on social media.

Language: English

Citations

3

Perceived legitimacy of layperson and expert content moderators DOI Creative Commons
Cameron Martel, Adam J. Berinsky, David G. Rand

et al.

PNAS Nexus, Journal Year: 2025, Volume and Issue: 4(5)

Published: April 30, 2025

Content moderation is a critical aspect of platform governance on social media and of particular relevance to addressing the belief in and spread of misinformation. However, current content moderation practices have been criticized as unjust. This raises an important question: who do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative survey experiment (n = 3,000) in which US participants evaluated the legitimacy of hypothetical content moderation juries tasked with evaluating whether online content was harmfully misleading. These juries varied in whether they were described as consisting of experts (e.g. domain experts), laypeople (e.g. social media users), or nonjuries (e.g. a computer algorithm). We also randomized features of jury composition (size, necessary qualifications) and whether juries engaged in discussion during evaluation. Overall, expert juries were perceived as more legitimate than layperson juries or a computer algorithm. However, modifying layperson jury features helped increase legitimacy perceptions: politically balanced juries enhanced perceived legitimacy, as did increased jury size, individual juror knowledge qualifications, and enabling juror discussion. Maximally legitimate layperson juries were rated comparably to expert panels. Republicans perceived expert juries as less legitimate compared to Democrats, but still rated expert juries as more legitimate than baseline layperson juries. Conversely, larger lay juries with news knowledge qualifications were perceived as legitimate across the political spectrum. Our findings shed light on the foundations of institutional legitimacy and have implications for the design of content moderation systems.

Language: English

Citations

0

Embedding Societal Values into Social Media Algorithms DOI Creative Commons
Michael S. Bernstein, Angéle Christin, Jeffrey T. Hancock

et al.

Journal of Online Trust and Safety, Journal Year: 2023, Volume and Issue: 2(1)

Published: Sept. 21, 2023

Social media influences what we see and hear, what we believe, and how we act, but artificial intelligence (AI) shapes social media. By changing our environments, AIs change our behavior: as per Winston Churchill, "We shape our buildings; thereafter, they shape us." Across billions of people on platforms from Facebook to Twitter to YouTube to TikTok, AI decides what is at the top of our feeds (Backstrom 2016; Fischer 2020), who we might connect with (Guy, Ronen, and Wilcox 2009), and what should be moderated, labeled with a warning, or outright removed (Gillespie 2018). These models shape the environment around us by amplifying or removing misinformation and radicalizing content (Hassan et al. 2015), highlighting or suppressing antisocial behavior such as harassment (Lees et al. 2022), and upranking or downranking content that may harm well-being (Burke, Cheng, and Gant 2020). How do we understand and engineer this sociotechnical ouroboros (Mansoury et al. 2020)? As the traditional critique goes, these challenges arise because algorithms are optimized for engagement (Narayanan 2023). But that is not the full story: to help manage undesirable outcomes of engagement-based algorithms, platforms have long augmented their algorithms with nonengagement signals (Eckles 2021). For instance, to defeat clickbait, Facebook began surveying users' opinions of specific posts, then building models that could predict and downrank posts users dislike, even if they are likely to click on them (Mosseri 2015). To ensure that all users receive feedback, designers weighed the effect of user feedback on others who would otherwise get few replies (Eckles, Kizilcec, and Bakshy 2016). To diminish the prevalence of content that violates community standards, such as gore, platforms built paid moderation teams to flag and remove content. This battery of surveys, moderation, downranking, and peer estimation are now components of many platform algorithms (2021). In this commentary, we refer to "AI" and "algorithm" interchangeably to mean machine learning procedures that learn from large-scale data. We are primarily concerned with AI focused on ranking and recommendation, especially feed ranking, but note that AIs play other roles as well, including (de)monetization, tagging, and political and toxicity judgments.
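The nonengagement adjustment the commentary describes, where a survey-trained model's predicted probability of dislike downranks posts that would otherwise win on engagement alone, can be sketched as a simple weighted score. This is a hedged toy illustration: the linear form, the weight, and the scores are invented assumptions, not any platform's actual values.

```python
# Toy sketch of combining an engagement signal with a nonengagement
# (predicted-dislike) signal; all numbers here are illustrative.
def ranking_score(engagement: float, p_dislike: float,
                  dislike_weight: float = 2.0) -> float:
    """Downrank posts a survey-trained model predicts users will dislike."""
    return engagement - dislike_weight * p_dislike

clickbait_score = ranking_score(engagement=0.9, p_dislike=0.5)
quality_score = ranking_score(engagement=0.6, p_dislike=0.05)
# Despite higher engagement, the clickbait post now ranks lower.
```

The point of the sketch is the mechanism, not the numbers: once a second objective enters the score, a post that maximizes clicks no longer automatically tops the feed.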

Language: English

Citations

8

Tweeting “in the language they understand”: a peace journalism conception of political contexts and media narratives on Nigeria's Twitter ban DOI
Ahmad Muhammad Auwal, Metin Ersoy

Media International Australia, Journal Year: 2024, Volume and Issue: unknown

Published: Sept. 19, 2024

The rise of social media usage has generated global debates over efforts to address widening concerns through the moderation of user practices and content that potentially undermine public safety and security. Content moderation has become a politically contested issue globally, while also attracting more attention across Africa and Nigeria in recent times. A case in point is the seven-month ban imposed on Twitter by the immediate-past government of Muhammadu Buhari, who was Nigeria's president from 2015 to 2023, following Twitter's decision to remove a tweet in which Buhari referenced the Nigerian Civil War and appeared to threaten violence against separatists in June 2021. To expand ongoing debates about the politicization of social media use and content moderation, we conceive a peace journalism framework synthesizing the impact of political communication and media narratives on societal conflict dynamics, offering a critical reflection on the political contexts of the ban. This theoretical lens is deployed to understand the implications of polarizing discourses originating from the communication strategies of political actors. We adapt indicators for peace- versus war-oriented coverage to analyze 48 journalistic articles published by 10 English-language news outlets during the initial three months of the ban and assess the role journalism can play in mitigating or exacerbating tensions. Findings indicate that Buhari's Twitter-based discourse elicited diverse perceptions of his intentions, fomenting polarization, and that news outlets used distinctive reporting styles to produce narratives likely to promote nonviolent responses or to escalate tensions.

Language: English

Citations

2