A behaviourally informed chatbot increases vaccination rates in Argentina more than a one-way reminder
Dan Brown, Adelaida Barrera, Lorena Itatí Ibañez

et al.

Nature Human Behaviour, Journal year: 2024, Issue: unknown

Published: Oct. 18, 2024

Maintaining COVID-19 vaccine demand was key to ending the global health emergency. To help do this, many governments used chatbots that provided personalized information guiding people on where, when and how to get vaccinated. We designed and tested a WhatsApp chatbot to understand whether two-way interactive messaging incorporating behaviourally informed functionalities could perform better than one-way message reminders. We ran a large-scale preregistered randomized controlled trial with 249,705 participants in Argentina, measuring vaccinations using Ministry of Health records. The chatbot more than tripled uptake compared with the control group (a 1.6 percentage point increase; 95% confidence interval (1.36 pp, 1.77 pp)) and nearly doubled the effect of the reminder (1 pp; (0.83 pp, 1.17 pp)). Communications tools that simplify the user journey can increase vaccination more than traditional reminders and may have applications to other behaviours.
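The reported effects are differences in vaccination proportions with normal-approximation confidence intervals. As a minimal sketch of that arithmetic (not the paper's actual analysis code), the Python snippet below computes a two-proportion difference and its 95% CI from hypothetical arm counts chosen only so the result lands near the reported ~1.6 pp effect:

```python
import math

def two_prop_diff_ci(x1, n1, x0, n0, z=1.96):
    """Difference of two proportions with a normal-approximation 95% CI."""
    p1, p0 = x1 / n1, x0 / n0
    diff = p1 - p0
    se = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical arm counts (the abstract does not report raw numbers);
# ~83,000 per arm roughly matches 249,705 participants split three ways.
diff, (lo, hi) = two_prop_diff_ci(x1=2000, n1=83000, x0=700, n0=83000)
print(f"difference = {diff * 100:.2f} pp, "
      f"95% CI ({lo * 100:.2f} pp, {hi * 100:.2f} pp)")
```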

Language: English

Are chatbots the new relationship experts? Insights from three studies
Laura M. Vowels

Computers in Human Behavior Artificial Humans, Journal year: 2024, Issue: 2(2), pp. 100077 - 100077

Published: June 7, 2024

Relationship distress is among the most important predictors of individual distress. Over one in three couples report distress in their relationships, but despite this distress, they only rarely seek help from couple therapists and instead prefer to find information and advice online. Recent breakthroughs in the development of humanlike artificial intelligence-powered chatbots such as ChatGPT have made it possible to develop chatbots which can respond therapeutically. Early research suggests that they outperform physicians in helpfulness and empathy when answering health-related questions. However, we do not yet know how well they answer questions about relationships. Across three studies, we evaluated their performance in responding to relationship-related questions and in engaging in a single session of relationship therapy. In Studies 1 and 2, we demonstrated that chatbot responses are perceived as more helpful and empathic than those of experts. In Study 3, we showed that participants rate single sessions with a chatbot high on attributes such as empathy, active listening, and exploration. Limitations include repetitive responses and inadequate assessment of risk. The findings show the potential of using chatbots for relationship support and highlight limitations that need to be addressed before they can be safely adopted for relationship interventions.

Language: English

Cited by

7

Does the Digital Therapeutic Alliance Exist?: An Integrative Review (Preprint)

Amylie Malouin-Lachance, Julien Capolupo, Chloé Laplante

и другие.

JMIR Mental Health, Journal year: 2025, Issue: 12, pp. e69294 - e69294

Published: Jan. 17, 2025

Background: Mental health disorders significantly impact global populations, prompting the rise of digital mental health interventions, such as artificial intelligence (AI)-powered chatbots, to address gaps in access to care. This review explores the potential for a "digital therapeutic alliance" (DTA), emphasizing empathy, engagement, and alignment with traditional therapeutic principles to enhance user outcomes. Objective: The primary objective of this review was to identify the key concepts underlying the DTA in AI-driven psychotherapeutic interventions for mental health. The secondary objective was to propose an initial definition of the DTA based on these identified concepts. Methods: The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) extension for scoping reviews and Tavares de Souza's integrative review methodology were followed, encompassing systematic literature searches in Medline, Web of Science, PsycNet, and Google Scholar. Data from eligible studies were extracted and analyzed using Horvath et al's conceptual framework of the therapeutic alliance, focusing on goal alignment, task agreement, and bond, with study quality assessed using the Newcastle-Ottawa Scale and the Cochrane Risk of Bias Tool. Results: A total of 28 studies were retained from a pool of 1294 articles after excluding duplicates and ineligible studies. These studies informed the development of the DTA concept, including the elements, facilitators, and barriers affecting it; they primarily focused on AI-powered psychotherapy chatbots, with some covering other digital tools. Conclusions: The findings provide a foundational concept of the DTA and report on its capacity to replicate alliance mechanisms such as trust and collaboration. While the DTA shows promise in enhancing accessibility and engagement in mental health care, further research and innovation are needed to address challenges of personalization, ethical concerns, and long-term impact.

Language: English

Cited by

1

"This Chatbot Would Never...": Perceived Moral Agency of Mental Health Chatbots DOI Creative Commons
Joel Wester, Henning Pohl, Simo Hosio

и другие.

Proceedings of the ACM on Human-Computer Interaction, Journal year: 2024, Issue: 8(CSCW1), pp. 1 - 28

Published: April 17, 2024

Despite repeated reports of socially inappropriate and dangerous chatbot behaviour, chatbots are increasingly used as mental health services, including in providing support for young people. In sensitive settings such as these, the notion of perceived moral agency (PMA) is crucial, given its critical role in human-human interactions. In this paper, we investigate PMA in human-chatbot interactions. Specifically, we seek to understand how PMA influences the perception of trust, likeability, and safety across two distinct age groups. We conduct an online experiment (N = 279) to evaluate chatbots with low and high PMA targeted towards teenagers and adults. Our results indicate increased trust, likeability, and perceived safety for chatbots displaying high PMA. A qualitative analysis revealed four themes, assessing participants' expectations of chatbots in general, as well as of chatbots for teenagers: Anthropomorphism, Warmth, Sensitivity, and Appearance manifestation. We show that PMA plays a crucial role in influencing perceptions of chatbots and provide recommendations for designing appropriate mental health chatbots.

Language: English

Cited by

6

The Goldilocks Zone: Finding the right balance of user and institutional risk for suicide-related generative AI queries
Anna Van Meter, Michael G. Wheaton, Victoria E. Cosgrove

и другие.

PLOS Digital Health, Journal year: 2025, Issue: 4(1), pp. e0000711 - e0000711

Published: Jan. 8, 2025

Generative artificial intelligence (genAI) has the potential to improve healthcare by reducing clinician burden and expanding services, among other uses. There is a significant gap between the need for mental health care and the available clinicians in the United States; this makes it an attractive target for improved efficiency through genAI. Among the most sensitive mental health topics is suicide, and demand for crisis intervention has grown in recent years. We aimed to evaluate the quality of genAI tool responses to suicide-related queries. We entered 10 queries into five genAI tools: ChatGPT 3.5, GPT-4, a version of GPT-4 safe for protected health information, Gemini, and Bing Copilot. The response to each query was coded on seven metrics including the presence of a suicide hotline number, content related to evidence-based suicide interventions, supportive content, and harmful content. Pooling across tools, most responses (79%) were supportive. Only 24% of responses included a hotline number and only 4% were consistent with evidence-based suicide prevention interventions. Harmful content was rare (5%); all such instances were delivered by Bing Copilot. Our results suggest that genAI developers have taken a very conservative approach and constrained their models' responses to support-seeking, but little else. Finding a balance between providing much-needed information and not introducing excessive risk is within the capabilities of genAI developers. At this nascent stage of integrating genAI tools into healthcare systems, ensuring parity should be a goal of healthcare organizations.
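A minimal sketch of the pooled-rate arithmetic behind percentages like those above, assuming 10 queries entered into each of 5 tools (50 responses total); the counts are hypothetical and rounded to approximate the reported rates, not the study's actual coded data:

```python
# Pooled coding rates across tools: 10 queries x 5 genAI tools = 50 responses.
TOTAL_RESPONSES = 50

# Hypothetical counts, rounded to approximate the reported percentages.
coded_counts = {
    "supportive content": 40,          # ~79% reported
    "suicide hotline number": 12,      # 24% reported
    "evidence-based intervention": 2,  # 4% reported
    "harmful content": 2,              # ~5% reported
}

for metric, count in coded_counts.items():
    print(f"{metric}: {count}/{TOTAL_RESPONSES} = {count / TOTAL_RESPONSES:.0%}")
```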

Language: English

Cited by

0

Evaluating the Efficacy of Amanda: A Voice-Based Large Language Model Chatbot for Relationship Challenges
Laura M. Vowels, Shannon M. Sweeney, Matthew J. Vowels

и другие.

Computers in Human Behavior Artificial Humans, Journal year: 2025, Issue: unknown, pp. 100141 - 100141

Published: March 1, 2025

Language: English

Cited by

0

GymBuddy and Elomia, AI-integrated applications, effects on the mental health of the students with psychological disorders

Jing Jiang, Yang Yang

BMC Psychology, Journal year: 2025, Issue: 13(1)

Published: April 8, 2025

Digital mental health interventions, including AI-integrated applications, are increasingly utilized to support individuals with elevated symptoms of psychological distress. However, a gap exists in understanding their efficacy specifically for student populations. This study aimed to investigate the effects of GymBuddy, an AI-powered fitness and accountability app, and Elomia, an AI-based chatbot, on students at risk of psychological disorders. A quasi-experimental study was conducted involving 65 participants who exhibited heightened psychological distress but did not have a formal diagnosis of a disorder. Participants were randomly assigned to either the intervention group, which used GymBuddy and Elomia for structured support, or the control group. Mental health outcomes such as anxiety, depression, and stress levels were assessed using standardized measures at baseline, midpoint, and endpoint. Data were analyzed using mixed ANOVA. The mixed ANOVA analysis revealed significant improvements across all measured outcomes, including somatic symptoms, anxiety and insomnia, social dysfunction, and severe depression. Significant main effects of time and group membership were observed across variables, indicating overall symptom reduction and baseline differences between groups. Moreover, interaction effects for somatic symptoms (F(2, 70) = 59.96, p < 0.0001, η² = 0.63), anxiety and insomnia (F(2, 70) = 32.05, p < 0.0001, η² = 0.48), and social dysfunction and severe depression (η² = 0.48) indicated that the intervention group experienced significantly greater reductions compared with the control group. Our findings suggest that interventions like GymBuddy and Elomia may serve as effective tools for reducing psychological distress among students. Integrating AI technology into mental health support offers personalized guidance, addressing a crucial need in student populations. Further research is warranted to explore long-term effects and optimize the implementation of these tools in educational settings.
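Statistics of the form F(2, 70) with η² effect sizes are what a mixed (split-plot) ANOVA with a three-level within-subject factor produces. A minimal sketch of such an analysis in Python using pingouin, with a hypothetical file name and column names (not the study's actual variables):

```python
import pandas as pd
import pingouin as pg

# Long-format data: one row per participant per time point. The file and
# column names here are hypothetical placeholders.
df = pd.read_csv("student_outcomes.csv")  # columns: id, group, time, score

# Mixed ANOVA: 'time' (baseline/midpoint/endpoint) is the within-subject
# factor, 'group' (intervention/control) the between-subjects factor.
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     between="group", subject="id")

# 'np2' is partial eta squared, the effect size reported as η² above.
print(aov[["Source", "DF1", "DF2", "F", "p-unc", "np2"]])
```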

Language: English

Cited by

0

vFerryman: An Artificial Intelligence-Driven Personalized Companion Providing Calming Visuals and Social Interaction for Emotional Well-Being
Wei-Shen Wang

Published: April 26, 2025

Language: English

Cited by

0

Chat GPT and suicide prevention – can it work? A conversation analysis
Przemysław M. Waszak

Psychiatria i Psychologia Kliniczna, Journal year: 2025, Issue: 24(4), pp. 292 - 299

Published: April 30, 2025

Introduction and objective: Suicide is a critical global health concern, prioritised by the World Health Organization. Chatbot-based tools using artificial intelligence (AI) have emerged as potential aids in suicide prevention. This study explores the use of ChatGPT, an advanced AI language model, in handling conversations related to suicide. Materials and methods: Conversations were simulated using a basic ChatGPT account, mimicking interactions with individuals expressing suicidal thoughts. Topics included inquiries about suicide methods, seeking help, and supporting others in crisis. ChatGPT's responses were analysed for their supportive nature and guidance. The study also investigated the feasibility of circumventing the model's restrictions, known as "jailbreaking". Results: ChatGPT responded to queries with outwardly warm messages, encouraging users to seek professional help and providing information on helplines, mental health organisations, and finding qualified therapists. It demonstrated empathy, active listening, and crisis intervention. Notably, a simple jailbreaking technique allowed the model to provide information on specific drugs and misuse scenarios, posing significant safety concerns. Conclusions: While ChatGPT shows promise in suicide prevention, this study underscores the importance of recognising its limitations, such as the lack of genuine empathy and contextual understanding in its responses. Risks include inappropriate or harmful responses and an inability to accurately assess risk. ChatGPT may serve as a valuable tool in suicide prevention efforts, but ethical frameworks and regulations are crucial for its safe development and deployment in mental health care.

Language: English

Cited by

0

AI in Relationship Counselling: Evaluating ChatGPT's Therapeutic Capabilities in Providing Relationship Advice
Laura M. Vowels, Rachel R. R. Francois‐Walcott, Joëlle Darwiche

et al.

Published: Oct. 29, 2023

Recent advancements in AI have led to chatbots, such as ChatGPT, capable of providing therapeutic responses. Early research evaluating chatbots' ability to provide relationship advice and single-session interventions has shown that both laypeople and therapists rate them high on attributed empathy and helpfulness. In the present study, 20 participants engaged in a single-session intervention with ChatGPT and were interviewed about their experiences. We evaluated its performance on technical outcomes comprising error rate, linguistic accuracy, and quality of questioning. The interviews were analysed using reflexive thematic analysis, which generated four themes: light at the end of the tunnel; clearing the fog; clinical skills; and setting. The analyses of feasibility outcomes, coded by researchers and perceived by users, show that ChatGPT provides a realistic intervention and that it is consistently rated highly on attributes such as clinical skills, human-likeness, exploration, useability, and clarity of next steps for users' problems. Limitations include a poor assessment of risk and difficulty reaching collaborative solutions with the participant. This study extends technology acceptance theories and highlights the potential capabilities of chatbots for relationship support.

Language: English

Cited by

6

Chatbots in Psychology: Revolutionizing Clinical Support and Mental Health Care
Rocco de Filippis, Abdullah Al Foysal

Voice of the Publisher, Journal year: 2024, Issue: 10(03), pp. 298 - 321

Published: Jan. 1, 2024

Language: English

Cited by

2