"It just happened to be the perfect thing": Real-life experiences of generative AI chatbots for mental health DOI

Steven Siddals,

Astrid Coxon, John Torous

et al.

Research Square, Journal Year: 2024, Volume and Issue: unknown

Published: July 12, 2024

Abstract The global mental health crisis underscores a critical need for accessible and effective interventions. Generative artificial intelligence (AI) chatbots, such as ChatGPT, are emerging as a novel solution, but research into their real-life usage is limited. We interviewed nineteen individuals about their experiences of using generative AI chatbots to work on their mental health. Most participants reported high levels of engagement and positive impacts, including improved mood, reduced anxiety, healing from trauma and loss, and improved relationships. Our analysis resulted in four overarching themes: 1) the value of an ‘emotional sanctuary’, i.e., a safe, validating space that is always available, 2) the ‘insightful guidance’ provided, particularly on the topic of relationships, 3) the ‘joy of connection’ experienced, and 4) comparisons between the ‘AI therapist’ and human therapy. Some of these themes echo previous research on rule-based chatbots, while others appear to be novel to generative AI. Participants highlighted the need for a better approach to safety guardrails, more human-like memory and the ability to lead the therapeutic process. Our findings suggest that generative AI chatbots may offer meaningful mental health support, but further research is needed to explore their effectiveness.

Language: English

“It happened to be the perfect thing”: experiences of generative AI chatbots for mental health DOI Creative Commons

Steven Siddals,

John Torous, Astrid Coxon

et al.

npj Mental Health Research, Journal Year: 2024, Volume and Issue: 3(1)

Published: Oct. 27, 2024

Abstract The global mental health crisis underscores the need for accessible, effective interventions. Chatbots based on generative artificial intelligence (AI), like ChatGPT, are emerging as novel solutions, but research on real-life usage is limited. We interviewed nineteen individuals about their experiences of using generative AI chatbots to work on their mental health. Participants reported high engagement and positive impacts, including better relationships and healing from trauma and loss. We developed four themes: (1) a sense of ‘emotional sanctuary’, (2) ‘insightful guidance’, particularly about relationships, (3) the ‘joy of connection’, and (4) comparisons between the ‘AI therapist’ and human therapy. Some themes echoed prior research on rule-based chatbots, while others seemed novel to generative AI. Participants emphasised the need for better safety guardrails, more human-like memory and the ability to lead the therapeutic process. Generative AI chatbots may offer mental health support that feels meaningful to users, but further research is needed to explore their effectiveness.

Language: English

Citations

9

Introducing CounseLLMe: A dataset of simulated mental health dialogues for comparing LLMs like Haiku, LLaMAntino and ChatGPT against humans DOI Creative Commons
Edoardo Sebastiano De Duro, Riccardo Improta, Massimo Stella

et al.

Emerging Trends in Drugs, Addictions, and Health, Journal Year: 2025, Volume and Issue: unknown, P. 100170 - 100170

Published: Jan. 1, 2025

Language: English

Citations

1

Evaluating the Efficacy of Amanda: A Voice-Based Large Language Model Chatbot for Relationship Challenges DOI Creative Commons
Laura M. Vowels, Shannon M. Sweeney, Matthew J. Vowels

et al.

Computers in Human Behavior Artificial Humans, Journal Year: 2025, Volume and Issue: unknown, P. 100141 - 100141

Published: March 1, 2025

Language: English

Citations

0

Introducing CounseLLMe: A dataset of simulated mental health dialogues for comparing LLMs like Haiku, LLaMAntino and ChatGPT against humans DOI Open Access
Edoardo Sebastiano De Duro, Riccardo Improta, Massimo Stella

et al.

Published: May 23, 2024

We introduce CounseLLMe as a multilingual, multimodal dataset of 400 simulated mental health counselling dialogues between two state-of-the-art Large Language Models (LLMs). These conversations - 20 quips each - were generated either in English (using OpenAI's GPT 3.5 and Claude-3's Haiku) or Italian (with Haiku and LLaMAntino), with prompts tuned also with the help of a professional in psychotherapy. We investigate the resulting conversations through comparison against human conversations on the same topic of depression. To compare the linguistic features, knowledge structure and emotional content of LLMs and humans, we employed textual forma mentis networks, i.e. cognitive networks where nodes represent concepts and links indicate syntactic and semantic relationships within dialogues' quips. We find that LLM-LLM dialogues match human ones in terms of patient-therapist trust exchanges: 1 in 5 quips contain trust along 10 conversational turns, versus the 24% rate found in humans. ChatGPT's and Haiku's simulated patients can reproduce human feelings of conflict and pessimism. However, human patients display non-negligible levels of anger/frustration that are missing in LLMs. LLMs' simulated therapists are worse at reproducing human therapists' patterns. All simulated patients reproduced patterns of increased absolutist pronoun usage, while second-person, trust-inducing pronouns characterised therapists. Our results show that LLMs can realistically reproduce several aspects of patient-therapist conversations, and we thusly release CounseLLMe to the public for novel data-informed opportunities in machine psychology.

Language: English

Citations

3

ChatGPT as a psychotherapist for anxiety disorders: An empirical study with anxiety patients DOI
Turki Alanzi,

Abdulaziz Alharthi,

Sarah N. Alrumman

et al.

Nutrition and Health, Journal Year: 2024, Volume and Issue: unknown

Published: Oct. 7, 2024

This study aims to investigate the role of ChatGPT as a psychotherapist for anxiety disorders, examining its effectiveness, acceptability, and potential benefits among individuals with anxiety disorders.

Language: English

Citations

2

Generative Artificial Intelligence in Mental Healthcare: An Ethical Evaluation DOI
Charlotte Blease, Adam Rodman

Current Treatment Options in Psychiatry, Journal Year: 2024, Volume and Issue: 12(1)

Published: Dec. 9, 2024

Language: English

Citations

2

Pain recognition and pain empathy from a human-centered AI perspective DOI Creative Commons

Siqi Cao,

Di Fu, Yang Xu

et al.

iScience, Journal Year: 2024, Volume and Issue: 27(8), P. 110570 - 110570

Published: July 24, 2024

Sensory and emotional experiences are essential for mental and physical well-being, especially within the realm of psychiatry. This article highlights recent advances in cognitive neuroscience, emphasizing the significance of pain recognition and empathic artificial intelligence (AI) for healthcare. We provide an overview of the development process of computational neuroscience regarding the mechanisms of pain and empathy. Through a comprehensive discussion, the article delves into critical questions such as the methodologies for AI to recognize pain from diverse sources of information, the necessity for AI to exhibit empathic responses, and the advantages and obstacles associated with empathic AI. Moreover, prospects and challenges are emphasized in relation to fostering empathic AI. By delineating potential pathways for future research, this article aims to contribute to developing effective AI assistants equipped with empathic capabilities, thereby introducing safe and meaningful interactions between humans and AI, particularly in the context of mental health.

Language: English

Citations

1

On a key mission to support academic engagement for trainee and early career psychiatrists: A scientific journal by trainees, for trainees DOI Creative Commons
Asilay Şeker, Filipa Santos Martins, Daniele Cavaleri

et al.

International Journal of Psychiatric Trainees, Journal Year: 2024, Volume and Issue: 2(2)

Published: Dec. 20, 2024

This editorial focuses on the role of the International Journal of Psychiatric Trainees in supporting the academic engagement of trainee and early career mental health professionals.

Language: English

Citations

0

"It just happened to be the perfect thing": Real-life experiences of generative AI chatbots for mental health DOI

Steven Siddals,

Astrid Coxon, John Torous

et al.

Research Square, Journal Year: 2024, Volume and Issue: unknown

Published: July 12, 2024

Abstract The global mental health crisis underscores a critical need for accessible and effective interventions. Generative artificial intelligence (AI) chatbots, such as ChatGPT, are emerging as a novel solution, but research into their real-life usage is limited. We interviewed nineteen individuals about their experiences of using generative AI chatbots to work on their mental health. Most participants reported high levels of engagement and positive impacts, including improved mood, reduced anxiety, healing from trauma and loss, and improved relationships. Our analysis resulted in four overarching themes: 1) the value of an ‘emotional sanctuary’, i.e., a safe, validating space that is always available, 2) the ‘insightful guidance’ provided, particularly on the topic of relationships, 3) the ‘joy of connection’ experienced, and 4) comparisons between the ‘AI therapist’ and human therapy. Some of these themes echo previous research on rule-based chatbots, while others appear to be novel to generative AI. Participants highlighted the need for a better approach to safety guardrails, more human-like memory and the ability to lead the therapeutic process. Our findings suggest that generative AI chatbots may offer meaningful mental health support, but further research is needed to explore their effectiveness.

Language: English

Citations

0