A behaviourally informed chatbot increases vaccination rates in Argentina more than a one-way reminder
Dan Brown, Adelaida Barrera, Lorena Itatí Ibañez

et al.

Nature Human Behaviour, Journal year: 2024, Issue: unknown

Published: Oct. 18, 2024

Maintaining COVID-19 vaccine demand was key to ending the global health emergency. To help do this, many governments used chatbots that provided personalized information guiding people on where, when and how to get vaccinated. We designed and tested a WhatsApp chatbot to understand whether two-way interactive messaging incorporating behaviourally informed functionalities could perform better than one-way message reminders. We ran a large-scale preregistered randomized controlled trial with 249,705 participants in Argentina, measuring vaccinations using Ministry of Health records. The chatbot more than tripled uptake compared with the control group (a 1.6 percentage point increase (95% confidence interval (1.36 pp, 1.77 pp))) and nearly doubled uptake compared with the reminder (a 1 pp increase (95% confidence interval (0.83 pp, 1.17 pp))). Communications tools that simplify the user journey can increase vaccination more than traditional reminders and may have applications to other behaviours.
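As a rough illustration of how effect sizes like those above can be read, the sketch below computes a difference in uptake between two trial arms as a percentage point effect with a 95% Wald confidence interval. This is not the authors' analysis code, and the counts are hypothetical placeholders chosen only to show the shape of the calculation.

# Illustrative sketch (not the trial's analysis code): difference in uptake
# between two arms, reported in percentage points with a 95% Wald CI.
from math import sqrt

def uptake_effect(vacc_treat, n_treat, vacc_ctrl, n_ctrl, z=1.96):
    """Return (difference, lower, upper) in percentage points."""
    p_t = vacc_treat / n_treat          # uptake in the chatbot arm
    p_c = vacc_ctrl / n_ctrl            # uptake in the comparison arm
    diff = p_t - p_c
    se = sqrt(p_t * (1 - p_t) / n_treat + p_c * (1 - p_c) / n_ctrl)
    return tuple(100 * x for x in (diff, diff - z * se, diff + z * se))

# Hypothetical counts, not taken from the trial data.
effect, lo, hi = uptake_effect(1900, 80000, 600, 80000)
print(f"effect = {effect:.2f} pp, 95% CI ({lo:.2f} pp, {hi:.2f} pp)")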

Language: English

An Overview of Chatbot-Based Mobile Mental Health Apps: Insights From App Description and User Reviews
Md Romael Haque, Sabirat Rubya

JMIR mHealth and uHealth, Journal year: 2023, Issue: 11, P. e44838 - e44838

Published: April 21, 2023

Chatbots are an emerging technology that show potential for use in mental health care apps to enable effective and practical evidence-based therapies. As this technology is still relatively new, little is known about recently developed apps, their characteristics, and their effectiveness. In this study, we aimed to provide an overview of the commercially available, popular mental health chatbots and how they are perceived by users. We conducted an exploratory observation of 10 apps that offer support and treatment for a variety of mental health concerns with a built-in chatbot feature and qualitatively analyzed 3621 consumer reviews from the Google Play Store and 2624 consumer reviews from the Apple App Store. We found that although chatbots' personalized, humanlike interactions were positively received by users, improper responses and assumptions about users' personalities led to a loss of interest. Because chatbots are always accessible and convenient, users can become overly attached to them and prefer them over interacting with friends and family. Furthermore, a chatbot may offer crisis support whenever the user needs it because of its 24/7 availability, yet chatbots still lack the understanding needed to properly identify a crisis. The chatbots considered in this study fostered a judgment-free environment and helped users feel more comfortable sharing sensitive information. Our findings suggest that chatbots have great potential to offer social and psychological support in situations where real-world human interaction, such as connecting to friends or family members or seeking professional support, is not preferred or not possible to achieve. However, there are several restrictions and limitations that these chatbots must establish according to the level of service they offer. Too much reliance on them can pose risks, such as isolation and insufficient assistance during times of crisis. Recommendations on customization and balanced persuasion to inform the design of these apps have been outlined based on the insights of our findings.

Language: English

Cited by

150

I am attracted to my Cool Smart Assistant! Analyzing Attachment-Aversion in AI-Human Relationships
João Guerreiro, Sandra María Correia Loureiro

Journal of Business Research, Journal year: 2023, Issue: 161, P. 113863 - 113863

Published: March 21, 2023

The conversation between humans and Artificial Intelligence (AI)-enabled intelligent voice assistants (IVA) can create bonds that go beyond a mere utilitarian purpose. The emotional cues in Human-AI interaction can lead consumers to feel connected with the AI-agents and even to consider such a relationship as cool. Although brand coolness is known to affect consumer behavior, little is known about how consumers perceive close relationships with IVAs and what the drivers of their use or avoidance are. Therefore, the current paper adds to the literature by analyzing how the AI-enabled assistant experience affects IVA coolness and customer-brand relationships using attachment-aversion (A-A) theory. A study with a total of 308 consumers showed that affective, behavioral, and intellectual experiences influence perceived coolness. Coolness was also found to affect the A-A relationship positively, influencing consumers' motivational strength to adopt, maintain and enhance the relationship in the future.

Language: English

Cited by

45

Roles, Users, Benefits, and Limitations of Chatbots in Health Care: Rapid Review
Moustafa Laymouna, Yuanchao Ma, David Lessard

et al.

Journal of Medical Internet Research, Journal year: 2024, Issue: 26, P. e56930 - e56930

Published: April 12, 2024

Background Chatbots, or conversational agents, have emerged as significant tools in health care, driven by advancements in artificial intelligence and digital technology. These programs are designed to simulate human conversations, addressing various health care needs. However, no comprehensive synthesis of chatbots' roles, users, benefits, and limitations is available to inform future research and application in the field. Objective This review aims to describe health care chatbots' characteristics, focusing on their diverse roles in the care pathway, user groups, benefits, and limitations. Methods A rapid review of literature published from 2017 to 2023 was performed with a search strategy developed in collaboration with a health sciences librarian and implemented in the MEDLINE and Embase databases. Primary studies reporting chatbot roles or benefits in health care were included. Two reviewers dual-screened the search results. Extracted data were subjected to content analysis. Results The chatbot roles were categorized into 2 themes: delivery of remote health services, including patient support, care management, education, skills building, and health behavior promotion, and provision of administrative assistance to health care providers. User groups spanned patients with chronic conditions as well as patients with cancer; individuals focused on lifestyle improvements; and specific demographic groups such as women, families, and older adults. Professionals and students in health care also emerged as users, alongside groups seeking mental health support, behavioral change, and educational enhancement. The benefits of health care chatbots were classified into improvement of care quality and efficiency and cost-effectiveness of care delivery. The limitations identified encompassed ethical challenges, medicolegal and safety concerns, technical difficulties, user experience issues, and societal and economic impacts. Conclusions Health care chatbots offer a wide spectrum of applications, potentially impacting various aspects of health care. While they are promising for improving care quality, their integration into the health system must be approached with consideration to ensure optimal, safe, and equitable use.

Language: English

Cited by

36

Putting AI to the test

OECD

OECD education spotlights, Journal year: 2023, Issue: unknown

Published: July 13, 2023

Advancements in artificial intelligence (AI) are laying the groundwork for extensive and rapid transformations in society. Understanding the relationship between AI capabilities and human skills is essential to ensure policy responsiveness to ongoing and incoming changes. The OECD has tracked how well AI systems fare on tasks from the Programme for International Student Assessment (PISA), comparing their performance with that of 15-year-old students in the test's core domains of reading, mathematics and science. Tests were conducted using the Generative Pre-Trained Transformer (GPT) family of large language models (LLMs), the technology behind ChatGPT, which took the world by storm after its public release in late 2022. Results show that both GPT versions outperform the average student in reading and science. In addition, we observe rapid advances in mathematics, where AI capabilities are quickly catching up with those of students. In November 2022, GPT-3.5 could answer 35% of a set of PISA mathematics tasks, a level significantly below that of humans, who solve 51% successfully on average. However, in March 2023, GPT-4 answered 40% of these tasks successfully. Policy implications of these results are discussed in this paper.

Language: English

Cited by

36

The Opportunities and Risks of Large Language Models in Mental Health
Hannah R. Lawrence, Renee Schneider, Susan B. Rubin

et al.

JMIR Mental Health, Journal year: 2024, Issue: 11, P. e59479 - e59479

Published: July 29, 2024

Global rates of mental health concerns are rising, and there is increasing realization that existing models of care will not adequately expand to meet the demand. With the emergence of large language models (LLMs) has come great optimism regarding their promise to create novel, large-scale solutions to support mental health. Despite their nascence, LLMs have already been applied to mental health–related tasks. In this paper, we summarize the extant literature on efforts to use LLMs to provide mental health education, assessment, and intervention, and highlight key opportunities for positive impact in each area. We then describe risks associated with LLMs' application to mental health and encourage the adoption of strategies to mitigate these risks. The urgent need for mental health support must be balanced with responsible development, testing, and deployment of LLMs. It is especially critical to ensure that LLMs are fine-tuned for mental health, enhance equity, and adhere to ethical standards, and that people, including those with lived experience of mental health concerns, are involved in all stages from development through deployment. Prioritizing these efforts will minimize potential harms and maximize the likelihood that LLMs positively affect mental health globally.

Language: English

Cited by

14

The Role of AI in Peer Support for Young People: A Study of Preferences for Human- and AI-Generated Responses
Jordyn Young, Laala M Jawara, Diep N. Nguyen

et al.

Published: May 11, 2024

Generative Artificial Intelligence (AI) is integrated into everyday technology, including news, education, and social media. AI has further pervaded private conversations through conversational partners, auto-completion, and response suggestions. As social media becomes young people's main method of peer support exchange, we need to understand when and how AI can facilitate and assist in such exchanges in a beneficial, safe, and socially appropriate way. We asked 622 young people to complete an online survey and evaluate blinded human- and AI-generated responses to help-seeking messages. We found that participants preferred the AI-generated responses in situations about relationships, self-expression, and physical health. However, when addressing a sensitive topic, like suicidal thoughts, participants preferred the human response. We also discuss the role of AI in peer support exchange and its implications for supporting young people's well-being. Disclaimer: This paper includes sensitive topics, including suicide ideation. Reader discretion is advised.

Language: English

Cited by

13

Development of an intelligent hospital information chatbot and evaluation of its system usability
Tai-Liang Chen, Chao‐Hung Kuo, Chun-Hung Chen

et al.

Enterprise Information Systems, Journal year: 2025, Issue: unknown

Published: Feb. 25, 2025

Language: English

Cited by

1

Promoting Cognitive Health in Elder Care with Large Language Model-Powered Socially Assistive Robots
Maria R. Lima, Amy O'Connell, F.B. Zhou

et al.

Published: April 24, 2025

Language: English

Cited by

1

Supporting the Demand on Mental Health Services with AI-Based Conversational Large Language Models (LLMs)
Tin Lai, Yao Eric Shi, Zicong Du

et al.

BioMedInformatics, Journal year: 2023, Issue: 4(1), P. 8 - 33

Published: Dec. 22, 2023

The demand for psychological counselling has grown significantly in recent years, particularly with the global outbreak of COVID-19, which heightened the need for timely and professional mental health support. Online counselling has emerged as the predominant mode of providing services in response to this demand. In this study, we propose the Psy-LLM framework, an AI-based assistive tool leveraging large language models (LLMs) for question answering in psychological consultation settings to ease the demand on mental health professions. Our framework combines pre-trained LLMs with real-world questions-and-answers (Q&A) from psychologists and extensively crawled psychological articles. It serves as a front-end tool for healthcare professionals, allowing them to provide immediate responses and mindfulness activities to alleviate patient stress. Additionally, it functions as a screening tool to identify urgent cases requiring further assistance. We evaluated the framework using intrinsic metrics, such as perplexity, and extrinsic evaluation, including human participant assessments of helpfulness, fluency, relevance, and logic. The results demonstrate its effectiveness in generating coherent and relevant answers to psychological questions. This article discusses the potential and limitations of using LLMs to enhance mental health support through AI technologies.
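For readers unfamiliar with the intrinsic metric mentioned above, the sketch below shows one common way to compute perplexity for a causal language model with the Hugging Face transformers library. It is not the Psy-LLM evaluation pipeline; the model checkpoint and the sample sentence are placeholders.

# Minimal perplexity sketch (not the Psy-LLM code): score how well a causal LM
# predicts a reference text. Perplexity = exp(mean next-token cross-entropy).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM checkpoint could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the average next-token loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("Feeling anxious before exams is common; slow breathing can help."))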

Language: English

Cited by

23

AI in relationship counselling: Evaluating ChatGPT's therapeutic capabilities in providing relationship advice
Laura M. Vowels, Rachel R. R. Francois‐Walcott, Joëlle Darwiche

et al.

Computers in Human Behavior: Artificial Humans, Journal year: 2024, Issue: 2(2), P. 100078 - 100078

Published: June 21, 2024

Recent advancements in AI have led to chatbots, such as ChatGPT, capable of providing therapeutic responses. Early research evaluating chatbots' ability to provide relationship advice and single-session interventions has shown that both laypeople and therapists rate them highly on attributed empathy and helpfulness. In the present study, 20 participants engaged in a single-session intervention with ChatGPT and were interviewed about their experiences. We evaluated ChatGPT's performance on technical outcomes comprising error rate, linguistic accuracy, and quality of questioning. The interviews were analysed using reflexive thematic analysis, which generated four themes: light at the end of the tunnel; clearing the fog; clinical skills; and setting. The analyses of feasibility and acceptability outcomes, coded by researchers and perceived by users, show that ChatGPT provides a realistic intervention: it was consistently rated highly on attributes of clinical skills, human-likeness, exploration, useability, and clarity of next steps for the users' problem. Limitations include poor assessment of risk and difficulty reaching collaborative solutions with the participant. This study extends acceptance theories and highlights ChatGPT's potential capabilities in providing relationship support.

Language: English

Cited by

8