Assisting Screening of Pediatric Depression with Large Language Models as Symptom Extractors: Pilot Study (Preprint)

Mariia Ignashina, Paulina Bondaronek, Dan Santel

et al.

Published: Jan. 28, 2025

BACKGROUND Depression is rising among people aged 10–24. Traditional depression screening methods, such as the PHQ-9, are particularly challenging for children. AI has the potential to help, but the scarcity of annotated datasets highlights the need for zero-shot approaches. In this work, we investigate the feasibility of using state-of-the-art Large Language Models (LLMs) for depressive symptom extraction in pediatric settings. This approach aims to complement traditional screening. OBJECTIVE The key objectives were to: 1) assess the feasibility of LLMs for identifying depressive symptoms in free-text clinical notes from pediatric populations; 2) benchmark the performance of leading LLMs at extracting PHQ-9 symptom-related information; 3) demonstrate the value of LLM-driven evidence for improving mental health screening, using an example interpretable AI-based tool. METHODS We examined free-text EHRs of patients with a depression diagnosis or related mood disorders (ages 6-24, 1.8K patients) from Cincinnati Children's Hospital Medical Center. We noticed drastic inconsistencies in the application and documentation of screening instruments, highlighting the difficulty of obtaining comprehensive diagnostic data for these conditions. We manually annotated 22 records across 16 depression-related categories, leveraging a combination of the PHQ-9 and Beck's Depression Inventory (BDI) to develop tailored categories specifically suited to pediatric symptoms. We then applied three LLMs (FLAN-T5, Llama, and Phi) to automate identification of these categories. RESULTS Our findings show that all LLMs were 60% more efficient than word matching, with FLAN-T5 leading in precision (average F1: 0.65, precision: 0.78) and excelling at rare categories like "sleep problems" (F1 0.92) and "self-loathing" (F1 0.8). Phi strikes a balance between precision (0.44) and recall (0.60), while Llama 3 achieves the highest recall (0.90) but tends to overgeneralize, making it less suitable. Remaining challenges include symptom complexity and overgeneralization, which lower scores. We finally demonstrate the utility of the LLM-provided annotations as features for an ML algorithm that differentiates cases from controls with a high score of 0.78, a major boost compared with a baseline that does not use these features. CONCLUSIONS The study's strengths include addressing symptom heterogeneity with high precision, and the computational efficiency of FLAN-T5 further supports its deployment in resource-limited settings. The work is constrained to one age group and requires validation in broader populations and other settings, but it demonstrates that LLMs can enhance screening, improve consistency, and provide an interpretable tool for clinicians.
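To make the extraction step concrete, the following is a minimal sketch of the kind of zero-shot prompting the abstract describes, using an off-the-shelf FLAN-T5 checkpoint from Hugging Face. The prompt wording and the symptom categories shown are illustrative assumptions, not the study's actual protocol or label set.

```python
# Minimal sketch of zero-shot depressive-symptom flagging with FLAN-T5.
# The prompt wording and category list are illustrative assumptions, not the
# categories or prompts used in the study.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

# Hypothetical subset of PHQ-9/BDI-derived categories.
CATEGORIES = ["sleep problems", "self-loathing", "loss of interest"]

def flag_symptoms(note: str) -> dict:
    """Ask the model a yes/no question per category for one clinical note."""
    results = {}
    for category in CATEGORIES:
        prompt = (
            f"Clinical note: {note}\n"
            f"Does this note mention {category}? Answer yes or no."
        )
        answer = generator(prompt, max_new_tokens=5)[0]["generated_text"]
        results[category] = answer.strip().lower().startswith("yes")
    return results

print(flag_symptoms("Patient reports trouble falling asleep and frequent self-criticism."))
```

The per-category yes/no framing is one simple way to turn a generative model into a symptom classifier; the resulting flags could then serve as features for a downstream case-control classifier, as the abstract describes.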

Language: English

Applications of large language models in psychiatry: a systematic review
Mahmud Omar, Shelly Soffer, Alexander W. Charney

et al.

Frontiers in Psychiatry, Journal Year: 2024, Volume and Issue: 15

Published: June 24, 2024

Background With their unmatched ability to interpret and engage with human language and context, large language models (LLMs) hint at the potential to bridge AI and human cognitive processes. This review explores the current application of LLMs, such as ChatGPT, in the field of psychiatry. Methods We followed PRISMA guidelines and searched PubMed, Embase, Web of Science, and Scopus, up until March 2024. Results From 771 retrieved articles, we included 16 that directly examine LLMs' use in psychiatry. LLMs, particularly ChatGPT and GPT-4, showed diverse applications in clinical reasoning, social media, and education within psychiatry. They can assist in diagnosing mental health issues, managing depression, evaluating suicide risk, and supporting education in the field. However, our review also points out their limitations, such as difficulties with complex cases and underestimation of suicide risks. Conclusion Early research on LLMs in psychiatry reveals versatile applications, from diagnostic support to educational roles. Given the rapid pace of advancement, future investigations are poised to explore the extent to which these models might redefine traditional roles in mental health care.

Language: English

Citations: 12

ChatGPT: A Pilot Study on a Promising Tool for Mental Health Support in Psychiatric Inpatient Care
Antônio Alves de Melo, Inês Canelas da Silva, Joana Lopes

et al.

International Journal of Psychiatric Trainees, Journal Year: 2024, Volume and Issue: unknown

Published: Feb. 9, 2024

Introduction This pilot study assesses ChatGPT's effectiveness as an artificial intelligence (AI) chatbot in psychiatric inpatient care. Global mental health challenges highlight a significant treatment gap, mainly due to restricted service access and professional shortages. AI chatbots like ChatGPT offer innovative solutions, providing services such as self-help advice, coaching, psychoeducation, and emotional support. Methods The study involved a group of psychiatric inpatients receiving either the ChatGPT intervention or standard care. The intervention group engaged in 3-6 sessions under guided prompts, while the control group received standard care alone. The primary outcome was based on World Health Organization Quality of Life Questionnaire – Brief Version (WHOQOL-BREF) scores, and the secondary outcome assessed patient satisfaction with ChatGPT. Results Twelve patients were included in this study, with a mean age of 27 (standard deviation 8.57). The intervention group (7 patients) showed notable improvements in WHOQOL-BREF scores compared with the control group (5 patients), and high levels of satisfaction were reported. Discussion These findings suggest that ChatGPT can enhance patient-reported quality of life in the inpatient setting, with good user satisfaction. However, limitations include the small sample size and the exclusion of patients with psychosis. Future studies should focus on larger, more diverse groups for broader validation. The results support the potential of AI chatbots in psychiatric care, where they may provide more accessible and varied treatment options. This study lays the groundwork for further exploration into their role in enhancing treatment, advocating larger-scale investigations to establish conclusive evidence of their applicability in clinical scenarios.

Language: English

Citations: 9

Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review
Mehrdad Rahsepar Meadi, Tomas Sillekens, Suzanne Metselaar

et al.

JMIR Mental Health, Journal Year: 2025, Volume and Issue: 12, P. e60432 - e60432

Published: Feb. 21, 2025

Background Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns. Objective We aimed to provide a comprehensive overview of the ethical considerations surrounding the use of CAI as a therapist for individuals with mental health issues. Methods We conducted a systematic search across the PubMed, Embase, APA PsycINFO, Web of Science, Scopus, Philosopher's Index, and ACM Digital Library databases. Our search query comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist and added additional articles through snowball searching. We included articles in English or Dutch, and all article types were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on expected concerns and was revised and complemented during the charting process. The ethical challenges were divided into themes; when a concern occurred in multiple articles, we identified it as a distinct theme. Results We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%), followed by commentaries (n=17, 16.8%). The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles); the most common topics within this theme were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including the effects of "black box" algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns for health care workers' jobs (n=16, 15.8%). Other concerns were discussed in 9.9% (n=10) of the articles. Conclusions This scoping review has comprehensively covered the ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders' perspectives are insufficiently represented, the study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison with human therapists, determining its appropriate roles in therapeutic contexts and its impact on access to care, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of guidelines for the responsible use of CAI in mental health care.

Language: English

Citations: 1

AI as the Therapist: Student Insights on the Challenges of Using Generative AI for School Mental Health Frameworks
Cecilia Ka Yuk Chan

Behavioral Sciences, Journal Year: 2025, Volume and Issue: 15(3), P. 287 - 287

Published: Feb. 28, 2025

The integration of generative AI (GenAI) in school-based mental health services presents new opportunities and challenges. This study focuses on the challenges of using GenAI chatbots as therapeutic tools by exploring secondary school students' perceptions of such applications. The data were collected from students who had both theoretical and practical experience with GenAI. Based on Grodniewicz and Hohol's framework highlighting the "Problem of a Confused Therapist", the "Problem of a Non-human Therapist", and the "Problem of a Narrowly Intelligent Therapist", the qualitative student reflections were examined using thematic analysis. The findings revealed that while students acknowledged AI's benefits, such as accessibility and non-judgemental feedback, they expressed significant concerns about its lack of empathy, trust, and adaptability. The implications underscore the need for chatbot use to be complemented by in-person counselling, emphasising the importance of human oversight in AI-augmented mental health care. The study contributes to a deeper understanding of how advanced AI can be ethically and effectively incorporated into school mental health frameworks, balancing technological potential with essential human interaction.

Language: English

Citations: 1

Exploring the efficacy and potential of large language models for depression: A systematic review
Mahmud Omar, Inbar Levkovich

Journal of Affective Disorders, Journal Year: 2024, Volume and Issue: unknown

Published: Nov. 1, 2024

Language: English

Citations: 8

The Impact of Artificial Intelligence on Human Sexuality: A Five-Year Literature Review 2020–2024
Nicola Döring, Thuy Dung Le, Laura M. Vowels

et al.

Current Sexual Health Reports, Journal Year: 2024, Volume and Issue: 17(1)

Published: Dec. 4, 2024

Language: English

Citations: 4

Understanding Attitudes and Trust of Generative AI Chatbots for Social Anxiety Support
Yimeng Wang, Yinzhou Wang, Kelly Crace

et al.

Published: April 24, 2025

Social anxiety (SA) has become increasingly prevalent. Traditional coping strategies often face accessibility challenges. Generative AI (GenAI) chatbots, known for their knowledgeable and conversational capabilities, are emerging as alternative tools for mental well-being. With the increased integration of GenAI, it is important to examine individuals' attitudes toward and trust in GenAI chatbots' support for SA. Through a mixed-method approach that involved surveys (n = 159) and interviews (n = 17), we found that individuals with severe symptoms tended to embrace the chatbots more readily, valuing their non-judgmental nature and perceived emotional comprehension. However, those with milder symptoms prioritized technical reliability. We identified factors influencing trust, such as the chatbots' ability to generate empathetic responses and their context-sensitive limitations, which were particularly salient among individuals with severe symptoms. We also discuss design implications for GenAI chatbot use in SA support, along with cognitive and practical considerations.

Language: English

Citations: 0

Developing Mental Health Support Chatbots in India: Challenges and Insights
Susmita Halder

Annals of Indian Psychiatry, Journal Year: 2025, Volume and Issue: 9(1), P. 99 - 101

Published: Jan. 1, 2025

Abstract Chatbots represent a new perspective in mental health support. The increased use of technology has made it a preferred medium for providing services, making it easier to reach a larger population. However, in India, the use of chatbots for mental health support is still in its infancy. There are many reasons behind this. The purpose of this article is to explore and identify the challenges faced by professionals in developing mental health chatbots and to examine their future potential.

Language: English

Citations: 0

Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop
Hye Sun Lee, Colton Wright, Julia Ferranto

et al.

Frontiers in Psychiatry, Journal Year: 2025, Volume and Issue: 15

Published: Jan. 31, 2025

Digital mental health interventions, such as artificial intelligence (AI) conversational agents, hold promise for improving access to care by innovating therapy and supporting its delivery. However, little research exists on patient perspectives regarding AI conversational agents, which is crucial to their successful implementation. This study aimed to fill that gap by exploring patients' perceptions of the acceptability of AI conversational agents in mental healthcare. Adults with self-reported mild to moderate anxiety were recruited from the UMass Memorial Health system. Participants engaged in semi-structured interviews to discuss their experiences and perceptions, and anxiety levels were assessed using the Generalized Anxiety Disorder scale. Data were collected between December 2022 and February 2023, and three researchers conducted a rapid qualitative analysis to identify and synthesize themes. The sample included 29 adults (ages 19-66), predominantly under age 35, non-Hispanic, White, and female. Participants reported a range of positive and negative experiences with AI conversational agents. Most held positive attitudes towards them, appreciating their utility and potential to increase access to care, yet some expressed only cautious optimism. About half endorsed negative opinions, citing AI's lack of empathy, technical limitations in addressing complex situations, and data privacy concerns. Participants desired human involvement in AI-driven care and expressed concern about the risk of such agents being seen as replacements for therapy. A subgroup preferred AI for administrative tasks rather than care provision. Overall, AI conversational agents were perceived as useful and beneficial for increasing access to care, but concerns about their capabilities, safety, and place in healthcare remained prevalent. Future implementation and integration should consider these perspectives to enhance effectiveness.

Language: English

Citations: 0

The externalization of internal experiences in psychotherapy through generative artificial intelligence: a theoretical, clinical, and ethical analysis
Yuval Haber, Dorit Hadar Shoval, Inbar Levkovich

et al.

Frontiers in Digital Health, Journal Year: 2025, Volume and Issue: 7

Published: Feb. 4, 2025

Introduction Externalization techniques are well established in psychotherapy approaches, including narrative therapy and cognitive behavioral therapy. These methods elicit internal experiences such as emotions and make them tangible through external representations. Recent advances in generative artificial intelligence (GenAI), specifically large language models (LLMs), present new possibilities for therapeutic interventions; however, their integration into core psychotherapeutic practices remains largely unexplored. This study aimed to examine the clinical, ethical, and theoretical implications of integrating GenAI into the therapeutic space through a proof-of-concept (POC) of AI-driven externalization techniques, while emphasizing the essential role of the human therapist. Methods To this end, we developed two customized GPT agents: VIVI (visual externalization), which uses DALL-E 3 to create images reflecting patients' internal experiences (e.g., depression or hope), and DIVI (dialogic, role-play-based externalization), which simulates conversations with externalized aspects of the patient's internal content. The tools were implemented and evaluated in a clinical case under professional psychological guidance. Results The case demonstrated that GenAI can serve as an "artificial third", creating a Winnicottian playful space that enhances, rather than supplants, the dyadic therapist-patient relationship. The tools successfully externalized complex internal dynamics, offering new therapeutic avenues, while also revealing challenges such as empathic failures and cultural biases. Discussion The findings highlight both the promise and the ethical complexities of AI-enhanced therapy, including concerns about data security, representation accuracy, and the balance of therapeutic authority. To address these challenges, we propose the SAFE-AI protocol, offering clinicians structured guidelines for responsible AI integration in therapy. Future research should systematically evaluate the generalizability, efficacy, and ethical implications of these tools across diverse populations and therapeutic contexts.
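As a purely illustrative sketch of the visual-externalization idea behind VIVI: the study's tools are customized GPTs operated inside ChatGPT, but a comparable helper could be wired to an image-generation API. The function name and prompt wording below are hypothetical assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a VIVI-like visual-externalization helper built on the
# OpenAI Images API (DALL-E 3). The paper's tools are custom GPTs inside ChatGPT;
# this code and its prompt wording are hypothetical approximations.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def externalize_as_image(internal_experience: str) -> str:
    """Turn a patient's described internal experience into an image URL."""
    prompt = (
        "Create a gentle, non-graphic image that externalizes the following "
        f"internal experience, for discussion in a therapy session: {internal_experience}"
    )
    response = client.images.generate(
        model="dall-e-3", prompt=prompt, size="1024x1024", n=1
    )
    return response.data[0].url

# Example: an image the therapist and patient could explore together.
print(externalize_as_image("a heavy grey fog of hopelessness that lifts slightly in the morning"))
```

Any such tool would, as the authors stress, sit alongside a human therapist rather than replace the dyadic relationship.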

Language: English

Citations: 0