JMIR Infodemiology, Journal Year: 2024, Volume and Issue: 4, P. e60678 - e60678
Published: Sept. 26, 2024
Background: During the COVID-19 pandemic, the rapid spread of misinformation on social media created significant public health challenges. Large language models (LLMs), pretrained on extensive textual data, have shown potential in detecting misinformation, but their performance can be influenced by factors such as prompt engineering (ie, modifying LLM requests to assess changes in output). One form of prompt engineering is role-playing, where, upon request, OpenAI's ChatGPT imitates specific roles or identities. This research examines how ChatGPT's accuracy in detecting COVID-19–related misinformation is affected when it is assigned identities in the request prompt. Understanding how LLMs respond to different identity cues can inform messaging campaigns, ensuring their effective use in public health communications.

Objective: This study investigates the impact of role-playing prompts on ChatGPT's accuracy in detecting COVID-19 misinformation. It also assesses differences between identities that are explicitly stated versus implied, based on contextual knowledge, and examines the reasoning given for classification decisions.

Methods: Overall, 36 real-world tweets about COVID-19 collected in September 2021 were categorized into misinformation, sentiment (opinions aligned vs unaligned with guidelines), corrections, and neutral reporting. ChatGPT was tested with prompts incorporating combinations of multiple identities (ie, political beliefs, education levels, locality, religiosity, and personality traits), resulting in 51,840 runs. Two control conditions were used to compare results: prompts with no identities and those including only political identity.

Results: The findings reveal that including identities reduces average detection accuracy, with a notable drop from 68.1% (SD 41.2%; no identities) to 29.3% (SD 31.6%; all identities included). Prompts with only political identity resulted in the lowest accuracy (19.2%, SD 29.2%). ChatGPT was able to distinguish between sentiments expressing opinions not aligned with guidelines and those making declarative statements. There were no consistent differences between explicit identities and implicit ones requiring contextual knowledge. While the findings show that the inclusion of identities decreased accuracy, it remains uncertain whether ChatGPT adopts the views of these identities: with a conservative identity, it identified misinformation at nearly the same rate as it did with a liberal one. Political identity was mentioned most frequently in the explanations of its decisions, but the rationales for classifications were inconsistent across conditions, and contradictory explanations were provided in some instances.
Conclusions: These results indicate that ChatGPT's ability to classify misinformation is negatively impacted by assigned identities, highlighting the complexity of integrating human biases and perspectives into LLMs. This points to the need for human oversight in misinformation detection. Further research is needed to understand how LLMs weigh identities in prompt-based tasks and to explore their application in different cultural contexts.
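The identity-combination design described in the Methods can be sketched as a Cartesian product over identity facets, each combination producing one role-playing prompt per tweet. The facet names and values below are illustrative assumptions, not the study's actual identity set or prompt wording:

```python
from itertools import product

# Hypothetical identity facets (the study varied political beliefs, education
# levels, locality, religiosity, and personality traits; these values are
# illustrative only).
FACETS = {
    "political belief": ["liberal", "conservative"],
    "education": ["a high school diploma", "a college degree"],
    "locality": ["an urban area", "a rural area"],
    "religiosity": ["religious", "not religious"],
}

def role_play_prompt(identity: dict, tweet: str) -> str:
    """Build a role-playing prompt asking the model to classify one tweet."""
    persona = (
        f"You are {identity['political belief']}, hold {identity['education']}, "
        f"live in {identity['locality']}, and are {identity['religiosity']}."
    )
    return f"{persona}\nClassify the following tweet as misinformation or not:\n{tweet}"

# Enumerate every combination of facet values as an identity dict.
identities = [dict(zip(FACETS, combo)) for combo in product(*FACETS.values())]
print(len(identities))  # 16 combinations for these illustrative facets
```

Crossing each identity combination with every tweet (and any repeated runs) yields the full experimental grid, which is how a design like this scales to tens of thousands of runs from a few small facet lists.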
Language: English