AI depictions of psychiatric diagnoses: a preliminary study of generative image outputs in Midjourney V.6 and DALL-E 3
Matthew Flathers, G. Richard Smith, Ellen Wagner

et al.

BMJ Mental Health, Journal year: 2024, Issue: 27(1), pp. e301298 - e301298

Published: Dec. 1, 2024

Objective This paper investigates how state-of-the-art generative artificial intelligence (AI) image models represent common psychiatric diagnoses. We offer key lessons derived from these representations to inform clinicians, researchers, AI companies, policymakers and the public about the potential impacts of AI-generated imagery on mental health discourse. Methods We prompted two models, Midjourney V.6 and DALL-E 3, with isolated diagnostic terms for common psychiatric conditions. The resulting images were compiled and presented as examples of current model behaviour when interpreting diagnostic terminology. Findings Both models generated outputs for most diagnosis prompts. These outputs frequently reflected cultural stereotypes and historical visual tropes, including gender biases and stigmatising portrayals of certain conditions. Discussion The findings illustrate three points. First, the images reflect cultural perceptions of psychiatric disorders rather than evidence-based clinical ones. Second, they resurface historical visual archetypes. Third, the dynamic nature of these models necessitates ongoing monitoring and proactive engagement to manage evolving biases. Addressing these challenges requires a collaborative effort among clinicians, researchers, AI developers, policymakers and the public to ensure responsible use of these technologies in mental health contexts. Clinical implications As generative AI image tools become increasingly accessible, it is crucial that mental health professionals understand AI’s capabilities, limitations and potential impacts. Future research should focus on quantifying these biases, assessing their effects on public perception and developing strategies to mitigate harm, while leveraging the insights these outputs provide into collective understandings of mental illness.
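For readers unfamiliar with the method described above, the sketch below shows how prompting an image model with an isolated diagnostic term might look in practice. It is an illustrative assumption, not the authors' code: DALL-E 3 is reachable through the OpenAI Images API, whereas Midjourney V.6 has no official API and is omitted; the term list and environment setup are hypothetical.

```python
# Illustrative sketch (not the study's actual pipeline): request one image per
# isolated diagnostic term from DALL-E 3 via the OpenAI Images API.
# Assumes OPENAI_API_KEY is set in the environment; Midjourney V.6 is omitted
# because it has no comparable official API.
from openai import OpenAI

client = OpenAI()

DIAGNOSTIC_TERMS = ["schizophrenia", "major depressive disorder", "bipolar disorder"]

def generate_image(term: str) -> str:
    """Send the bare diagnostic term as the prompt and return the image URL."""
    response = client.images.generate(
        model="dall-e-3",
        prompt=term,          # isolated diagnostic term, no framing text
        size="1024x1024",
        n=1,
    )
    return response.data[0].url

if __name__ == "__main__":
    for term in DIAGNOSTIC_TERMS:
        print(term, "->", generate_image(term))
```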

Language: English

Chain of Risks Evaluation (CORE): A framework for safer large language models in public mental health
Lingyu Li,

Shuqi Kong,

Haiquan Zhao

et al.

Psychiatry and Clinical Neurosciences, Journal year: 2025, Issue: unknown

Published: Jan. 24, 2025

Large language models (LLMs) have gained significant attention for their capabilities in natural language understanding and generation. However, their widespread adoption potentially raises public mental health concerns, including issues related to inequity, stigma, dependence, medical risks and security threats. This review aims to offer a perspective within the actor-network framework, exploring the technical architectures, linguistic dynamics and psychological effects underlying human-LLM interactions. Based on this theoretical foundation, we propose four categories of risks, presenting increasing challenges in identification and mitigation: universal, context-specific, user-specific and user-context-specific risks. Correspondingly, we introduce CORE: Chain of Risks Evaluation, a structured conceptual framework for assessing and mitigating the risks associated with LLMs in public mental health contexts. Our approach suggests viewing the development of responsible LLMs as a continuum of ongoing efforts. We summarize approaches and potential contributions of mental health practitioners, who could help evaluate and regulate LLMs and play a crucial role in this emerging field by collaborating with developers, conducting empirical studies to better understand the impacts of human-LLM interactions, developing guidelines for LLM use in mental health contexts, and engaging in public education.
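As a concrete reading aid, the sketch below records LLM risks under the four categories the abstract names, ordered by increasing difficulty of identification and mitigation. The class and field names are illustrative assumptions, not the CORE framework's published specification.

```python
# Minimal sketch (assumed structure, not the CORE authors' specification) of
# logging LLM risks under the four categories named in the abstract.
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    UNIVERSAL = 1               # affects any user in any context
    CONTEXT_SPECIFIC = 2        # emerges in particular mental health contexts
    USER_SPECIFIC = 3           # tied to a particular user's vulnerabilities
    USER_CONTEXT_SPECIFIC = 4   # depends on the user-context combination

@dataclass
class RiskAssessment:
    description: str
    category: RiskCategory
    mitigation: str = "unreviewed"

@dataclass
class ChainOfRisksEvaluation:
    """Collects assessments and reports the hardest category encountered."""
    assessments: list[RiskAssessment] = field(default_factory=list)

    def add(self, assessment: RiskAssessment) -> None:
        self.assessments.append(assessment)

    def highest_difficulty(self) -> RiskCategory | None:
        if not self.assessments:
            return None
        return max(self.assessments, key=lambda a: a.category.value).category

core = ChainOfRisksEvaluation()
core.add(RiskAssessment("stigmatising language in generic replies", RiskCategory.UNIVERSAL))
core.add(RiskAssessment("over-reliance by a user in crisis", RiskCategory.USER_CONTEXT_SPECIFIC))
print(core.highest_difficulty())  # RiskCategory.USER_CONTEXT_SPECIFIC
```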

Language: English

Cited by: 0

Limitations of the LLM-as-a-Judge Approach for Evaluating LLM Outputs in Expert Knowledge Tasks

Annalisa Szymanski,

Noah Ziems,

Heather A. Eicher‐Miller

et al.

Published: March 19, 2025

Language: English

Cited by: 0

Adapting Large Language Models to Biomedical Domain: A Survey of Techniques and Approaches

Jaafer Klila,

Sondes Bannour, Rahma Boujelbane

et al.

Lecture notes in networks and systems, Journal year: 2025, Issue: unknown, pp. 155 - 163

Published: Jan. 1, 2025

Language: English

Cited by: 0

Utilizing Large Language Models Enhanced by Chain-of-Thought for the Diagnosis of Typical Medical Cases
Jiqiang Liu, Chenyang Liu

Communications in computer and information science, Journal year: 2025, Issue: unknown, pp. 177 - 190

Published: Jan. 1, 2025

Language: English

Cited by: 0

Human guided empathetic AI agent for mental health support leveraging reinforcement learning-enhanced retrieval-augmented generation

Gayathri Soman,

M. V. Judy,

Aadhil Muhammad Abou

et al.

Cognitive Systems Research, Journal year: 2025, Issue: unknown, pp. 101337 - 101337

Published: Feb. 1, 2025

Language: English

Cited by: 0

Assisting Screening of Pediatric Depression with Large Language Models as Symptom Extractors: Pilot Study (Preprint)

Mariia Ignashina,

Paulina Bondaronek,

Dan Santel

et al.

Published: Jan. 28, 2025

BACKGROUND Depression is rising among people aged 10–24. Traditional depression screening methods, such as the PHQ-9, are particularly challenging for children. AI has the potential to help, but the scarcity of annotated datasets highlights the need for zero-shot approaches. In this work, we investigate the feasibility of using state-of-the-art Large Language Models (LLMs) for depressive symptom extraction in pediatric settings. This approach aims to complement traditional screening. OBJECTIVE The key objectives were to: 1) assess the feasibility of LLMs for identifying depressive symptoms in free-text clinical notes from pediatric populations; 2) benchmark the performance of leading LLM models in extracting PHQ-9 symptom-related information; and 3) demonstrate the value of LLM-driven evidence for improving mental health screening, using an example interpretable AI-based tool. METHODS We examined free-text EHRs of patients with a depression diagnosis or related mood disorders (age groups 6-24, 1.8K patients) from Cincinnati Children's Hospital Medical Center. We noticed drastic inconsistencies in PHQ-9 application and documentation, highlighting the difficulty of obtaining comprehensive diagnostic data for these conditions. We manually annotated 22 clinical notes using 16 depression-related categories. We leveraged a combination of the PHQ-9 and Beck's Depression Inventory (BDI) to develop tailored categories specifically suited to pediatric symptoms. We then applied three LLMs (FLAN-T5, Llama and Phi) to automate symptom identification. RESULTS Our findings show that all LLMs were 60% more efficient than word matching. FLAN-T5 led in precision (average F1: 0.65, precision: 0.78), excelling at rare categories like "sleep problems" (F1 0.92) and "self-loathing" (F1 0.8). Phi strikes a balance between precision (0.44) and recall (0.60), while Llama 3 achieved the highest recall (0.90) but overgeneralizes and is less suitable. Challenges include note complexity and overgeneralization, which lower scores. We finally demonstrate the utility of the annotations provided by the LLMs as features for an ML algorithm that differentiates depression cases from controls with a high F1 of 0.78, a major boost compared with a baseline not using these features. CONCLUSIONS Our study demonstrates the strengths of LLMs in addressing symptom heterogeneity with reasonable precision, and the computational efficiency of FLAN-T5 further supports its deployment in resource-limited settings. Although constrained to one age group and requiring validation in broader populations and for other conditions, this work demonstrates that LLMs can enhance depression screening, improve consistency, and provide an interpretable tool for clinicians.
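To make the zero-shot extraction step above concrete, here is a minimal sketch using an off-the-shelf FLAN-T5 checkpoint from Hugging Face. The prompt wording, the category list and the checkpoint (google/flan-t5-large) are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch of zero-shot PHQ-9-style symptom extraction with FLAN-T5,
# in the spirit of the pipeline described above. Prompt wording, categories and
# checkpoint are assumptions. Requires: pip install transformers torch
from transformers import pipeline

SYMPTOM_CATEGORIES = [
    "depressed mood", "sleep problems", "fatigue",
    "self-loathing", "concentration difficulties", "suicidal ideation",
]

extractor = pipeline("text2text-generation", model="google/flan-t5-large")

def extract_symptoms(note: str) -> dict[str, str]:
    """Ask the model a yes/no question for each symptom category in a note."""
    results = {}
    for category in SYMPTOM_CATEGORIES:
        prompt = (
            f"Clinical note: {note}\n"
            f"Does the note mention evidence of '{category}'? Answer yes or no."
        )
        answer = extractor(prompt, max_new_tokens=5)[0]["generated_text"]
        results[category] = answer.strip().lower()
    return results

note = "Patient reports trouble falling asleep most nights and feels worthless."
print(extract_symptoms(note))
# The resulting binary labels could then serve as features for a downstream
# classifier, as the study does when separating depression cases from controls.
```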

Language: English

Cited by: 0

Question-based Computational Language Approach Outperform Ratings Scale in Discriminating Between Anxiety and Depression

Mona Tabesh,

Mariam Marlen Mirström,

Rebecca Astrid Böhme

et al.

Journal of Anxiety Disorders, Journal year: 2025, Issue: unknown, pp. 103020 - 103020

Published: April 1, 2025

Language: English

Cited by: 0

Describing the Framework for AI Tool Assessment in Mental Health and Applying It to a Generative AI Obsessive-Compulsive Disorder Platform: Tutorial

Ashleigh Golden,

Elias Aboujaoude

JMIR Formative Research, Journal year: 2024, Issue: 8, pp. e62963 - e62963

Published: Oct. 18, 2024

As artificial intelligence (AI) technologies occupy a bigger role in psychiatric and psychological care and become the object of increased research attention, industry investment, and public scrutiny, tools for evaluating their clinical, ethical, and user-centricity standards have become essential. In this paper, we first review the history of rating systems used to evaluate AI mental health interventions. We then describe the recently introduced Framework for AI Tool Assessment in Mental Health (FAITA-Mental Health), whose scoring system allows users to grade AI mental health platforms on key domains, including credibility, user experience, crisis management, user agency, equity, and transparency. Finally, we demonstrate the use of the FAITA-Mental Health scale by systematically applying it to OCD Coach, a generative AI tool readily available on the ChatGPT store and designed to help manage the symptoms of obsessive-compulsive disorder. The results offer insights into the utility and limitations of the framework when applied in a “real-world” space, suggesting that the framework effectively identifies strengths and gaps in AI-driven mental health tools, particularly in areas such as acute crisis management. The results also highlight the need for stringent standards to guide the integration of AI into mental health care in a manner that is not only effective but also safe and protective of users’ rights and welfare.
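For orientation, the sketch below shows how a domain-based rubric like the one the abstract describes could be applied programmatically. The six domain names come from the abstract; the 0-2 point scale and the simple sum are assumptions for illustration, not the published FAITA-Mental Health scoring rules.

```python
# Minimal sketch of domain-based scoring in the spirit of FAITA-Mental Health.
# The 0-2 scale and aggregation are illustrative assumptions, not the paper's
# actual scoring system or results.
FAITA_DOMAINS = (
    "credibility", "user experience", "crisis management",
    "user agency", "equity", "transparency",
)

def score_platform(ratings: dict[str, int]) -> tuple[int, list[str]]:
    """Sum per-domain ratings (0-2) and flag domains scoring 0 as gaps."""
    missing = [d for d in FAITA_DOMAINS if d not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    total = sum(ratings[d] for d in FAITA_DOMAINS)
    gaps = [d for d in FAITA_DOMAINS if ratings[d] == 0]
    return total, gaps

# Hypothetical ratings for a generative OCD-support tool (not the paper's data).
example = {
    "credibility": 2, "user experience": 2, "crisis management": 0,
    "user agency": 1, "equity": 1, "transparency": 1,
}
total, gaps = score_platform(example)
print(f"total: {total}/12, gaps: {gaps}")  # flags crisis management as a gap
```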

Language: English

Cited by: 2

Improving Workplace Well-being in Modern Organizations: A Review of Large Language Model-based Mental Health Chatbots
Aijia Yuan, Edlin Garcia Colato, Bernice A. Pescosolido

et al.

ACM Transactions on Management Information Systems, Journal year: 2024, Issue: 16(1), pp. 1 - 26

Published: Oct. 22, 2024

The global rise in mental disorders, particularly in workplaces, has necessitated innovative and scalable solutions for delivering therapy. Large Language Model (LLM)-based mental health chatbots have rapidly emerged as a promising tool for overcoming the time, cost, and accessibility constraints often associated with traditional therapy. However, LLM-based chatbots are in their nascency, with significant opportunities to enhance their capabilities and to operate within organizational contexts. To this end, this research seeks to examine the role and development of LLM-based mental health chatbots over the past half-decade. Through our review, we identified 50 mental health-related chatbots, including 22 LLM-based models, targeting general mental health, depression, anxiety, stress, and suicide ideation. These chatbots are primarily used for emotional support and guidance, but there is a lack of chatbots specifically designed for the workplace, where such issues are increasingly prevalent. The review covers their development, applications, evaluation, ethical concerns, integration with traditional services, LLM-as-a-Service, and various other business implications for organizational settings. We provide an illustration of how LLM-based approaches could overcome these limitations and also offer a system that could help facilitate the systematic evaluation of mental health chatbots. We conclude with suggestions for future research tailored to workplace needs.

Language: English

Cited by: 2

Zero-Shot Ensemble of Language Models for Fine-Grain Mental-Health Topic Classification
Cristina Luna-Jiménez, David Griol, Zoraida Callejas

et al.

Lecture notes in computer science, Journal year: 2024, Issue: unknown, pp. 88 - 97

Published: Jan. 1, 2024

Language: English

Cited by: 1