Leveraging Chatbots to Combat Health Misinformation for Older Adults: Participatory Design Study (Preprint)
Wei Peng, Hee Rin Lee, Sue Lim

et al.

Published: May 20, 2024

BACKGROUND Older adults, a population particularly susceptible to misinformation, may experience attempts at health-related scams or defrauding, and they may unknowingly spread misinformation. Previous research has investigated managing misinformation through media literacy education or by supporting users with fact-checking information and cautions about potentially false content, yet studies focusing on older adults are limited. Chatbots have the potential to educate and support users in misinformation management. However, many technologies designed for this population use a needs-based approach that considers aging as a deficit, leading to issues with adoption. Instead, we adopted an asset-based approach, inviting older adults to be active collaborators in envisioning how intelligent technologies can enhance their misinformation management practices. OBJECTIVE This study aims to understand how older adults envision chatbots' capabilities for misinformation management. METHODS We conducted 5 participatory design workshops with a total of 17 older adult participants to ideate ways in which chatbots can help them manage misinformation. The workshops included 3 stages: developing scenarios reflecting older adults' encounters with misinformation in their lives, understanding existing chatbot platforms, and envisioning how chatbots could intervene in the scenarios from stage 1. RESULTS We found that misinformation concerns arose more from interpersonal relationships than from individuals' ability to detect misinformation in individual pieces of information. This finding underscored the importance of chatbots that act as mediators to facilitate communication and resolve conflict. In addition, participants emphasized autonomy. They desired chatbots that teach them to navigate the information landscape and come to conclusions on their own. Finally, distrust of IT companies and of governments' ability to regulate the IT industry affected participants' trust in chatbots. Thus, designers should consider using well-trusted sources and practicing transparency to increase trust in chatbot-based tools. Overall, our results highlight the need for tools that go beyond fact-checking. CONCLUSIONS This study provides insights into how chatbots can be designed as part of technological systems for misinformation management among older adults. Our study underscores the value of engaging older adults as co-designers of chatbot-based interventions.

Language: English

Revolutionizing generative pre-traineds: Insights and challenges in deploying ChatGPT and generative chatbots for FAQs
Feriel Khennouche, Youssef Elmir, Yassine Himeur

et al.

Expert Systems with Applications, Journal year: 2024, Issue 246, pp. 123224 - 123224

Published: Jan. 19, 2024

Language: English

Cited

35

Revealing the complexity of users’ intention to adopt healthcare chatbots: A mixed-method analysis of antecedent condition configurations
Xiwei Wang, Ran Luo, Yutong Liu

et al.

Information Processing & Management, Journal year: 2023, Issue 60(5), pp. 103444 - 103444

Published: July 6, 2023

Language: English

Cited

27

Better interaction performance attracts more chronic patients? Evidence from an online health platform
Huan Liu, Yao Zhang, Yuelin Li

et al.

Information Processing & Management, Journal year: 2023, Issue 60(4), pp. 103413 - 103413

Published: May 25, 2023

Language: English

Cited

15

The way you assess matters: User interaction design of survey chatbots for mental health
Yucheng Jin, Li Chen, Xianglin Zhao

et al.

International Journal of Human-Computer Studies, Journal year: 2024, Issue 189, pp. 103290 - 103290

Published: May 22, 2024

Language: English

Cited

4

Harnessing AI to Address Misinformation on Cultivated Meat: The Impact of Chatbot Expertise and Correction Sidedness
Mengxue Ou, Shirley S. Ho, Stanley A. Wijaya

et al.

Science Communication, Journal year: 2025, Issue unknown

Published: Feb. 11, 2025

This study conducted a 2 (chatbot expertise cues: high vs. low) × 2 (correction sidedness: one-sided vs. two-sided) between-subjects experiment to investigate the effects of chatbot features and output framing on misinformation correction and public perceptions of cultivated meat. Results highlighted the importance of expertise cues in shaping the perceived credibility of the correction, with two-sided corrections from high-expertise chatbots perceived as more credible than those from low-expertise chatbots. Perceived credibility mediated the interaction effect between expertise cues and correction sidedness on individuals’ attitudes and consumption intentions towards cultivated meat. Both theoretical and practical implications of these findings were discussed.

Language: English

Cited

0

Designing Chatbots for Misinformation Correction: Examining the Roles of Chatbot Expertise and Anthropomorphism
Shirley S. Ho, Stanley A. Wijaya, Mengxue Ou

et al.

International Journal of Human-Computer Interaction, Journal year: 2025, Issue unknown, pp. 1 - 15

Published: April 29, 2025

Language: English

Cited

0

Social Media Misinformation Wars: How Message Features, Political Cynicism, and Conspiracy Beliefs Shape Government-Led Public Health Debunking Effectiveness
Xinzhi Zhang, Tai‐Quan Peng, Qinfeng Zhu

et al.

Journalism & Mass Communication Quarterly, Journal year: 2025, Issue unknown

Published: May 14, 2025

This study investigates the effectiveness of public health institutions’ misinformation debunking on social media by examining the impact of message features (social intermediaries, framing, and cues) alongside the moderating roles of political cynicism and conspiracy beliefs. We conducted preregistered survey experiments in Hong Kong, the Netherlands, and the United States (total N = 2,769). Results show that sponsored messages outperformed AI recommendations. Causal framing could backfire for political cynics (in both Hong Kong and the Netherlands). In the United States, peer-shared messages enhanced source evaluations among those with higher conspiracy beliefs.

Language: English

Cited

0

Navigating Technological Shifts: An Examination of User Inertia and Technology Prestige in Large-Language-Model AI Chatbot Transition
Yipeng Xi

International Journal of Human-Computer Interaction, Journal year: 2024, Issue unknown, pp. 1 - 17

Published: Sep. 30, 2024

Language: English

Cited

3

The media literacy dilemma: can ChatGPT facilitate the discernment of online health misinformation?
Wei Peng, Jingbo Meng, Tsai-Wei Ling

et al.

Frontiers in Communication, Journal year: 2024, Issue 9

Published: Nov. 29, 2024

Online health misinformation carries serious social and public health implications. A growing prevalence of sophisticated online misinformation employs advanced persuasive tactics, making discernment progressively more challenging. Enhancing media literacy is a key approach to improving the ability to discern misinformation. The objective of the current study was to examine the feasibility of using generative AI to dissect persuasive tactics as a scaffolding tool to facilitate misinformation discernment. In a mixed 3 (media literacy tool: control vs. National Library of Medicine [NLM] checklist vs. ChatGPT tool) × 2 (information type: true information vs. misinformation) × 2 (evaluation difficulty: hard vs. easy) experiment, we found that dissecting persuasive strategies with the ChatGPT tool can be equally effective when compared with the NLM checklist, with information type as a significant moderator, such that the ChatGPT tool was more helpful in identifying misinformation than true information. However, it performed worse in some respects. No difference was found in perceived usefulness or future use intention compared with the checklist. The results suggest that interactive or conversational features might enhance such a tool.

Language: English

Cited

2

The influence of communicating agent on users’ willingness to interact: A moderated mediation model
Qi Zhou, Bin Li

Cyberpsychology: Journal of Psychosocial Research on Cyberspace, Journal year: 2024, Issue 18(2)

Published: April 11, 2024

Empowered by AI, chatbots are increasingly being integrated to interact with users in one-on-one communication. However, academic scrutiny of the impact of this online interaction is lacking. This study aims to fill that gap by applying self-presentation theory (presenting a desired self-impression to others) to explore how the communicating agent (chatbot vs. human agent) in interactive marketing influences users’ interaction willingness, as well as the moderating roles of public self-consciousness (a sense of apprehension over public concern) and sensitive information disclosure (private information linked to an individual). The results of three experimental studies indicate that chatbots can improve interaction willingness by mitigating self-presentation concern. Further, public self-consciousness and sensitive information disclosure moderated these interactions. The effects were particularly impactful for users with higher public self-consciousness and in situations of sensitive information disclosure. The findings provide theoretical and practical implications for human-chatbot interaction, chatbot marketing strategy, and chatbot application.

Language: English

Cited

1