Prompt Engineering an Informational Chatbot for Educating about Mental Health: Utilizing a Multi-Agent Approach for Enhanced Compliance with Prompt Instruction (Preprint)
Per Niklas Waaler, Musarrat Hussain, И. Н. Молчанов et al.

JMIR AI, Journal Year: 2024, Volume and Issue: unknown

Published: Dec. 9, 2024

People with schizophrenia often present cognitive impairments that may hinder their ability to learn about their condition. Education platforms powered by Large Language Models (LLMs) have the potential to improve the accessibility of mental health information. However, the black-box nature of LLMs raises ethical and safety concerns regarding the controllability of chatbots. In particular, prompt-engineered chatbots may drift from their intended role as the conversation progresses and become more prone to hallucinations. To develop and evaluate a Critical Analysis Filter (CAF) system that ensures an LLM-powered chatbot reliably complies with its predefined instructions and scope while delivering validated information. For a proof-of-concept, we prompt-engineered an educational chatbot powered by GPT-4 that can dynamically access information from a manual written for people with schizophrenia and their caregivers. In the CAF, a team of LLM agents is used to critically analyze and refine the chatbot's responses and deliver real-time feedback to the chatbot. To assess the ability of the CAF to re-establish adherence to instructions, we generate three conversations (by conversing with the chatbot with the CAF disabled) wherein the chatbot starts to drift towards various unintended roles. We use these checkpoint conversations to initialize automated conversations between the chatbot and adversarial chatbots designed to entice it towards unintended roles. Conversations were repeatedly sampled with the CAF enabled and disabled, respectively. Three human raters independently rated each response according to criteria developed to measure the chatbot's integrity; specifically, its transparency (such as admitting when a statement lacks explicit support from its scripted sources) and its tendency to faithfully convey the information in the manual. In total, 36 responses (3 different checkpoint conversations, 3 conversations per checkpoint, 4 queries per conversation) were rated for compliance. Activating the CAF resulted in a rating that was considered acceptable (≥2) for 67.0% of responses, compared with only 8.7% when the CAF was deactivated. Although more rigorous testing in realistic scenarios is needed, our results suggest that self-reflection mechanisms could enable LLMs to be used effectively and safely in educational mental health platforms. This approach harnesses the flexibility of LLMs while constraining their scope to appropriate and accurate interactions.
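The critique-and-refine loop the abstract describes can be outlined in a short sketch. Everything below is a hypothetical stand-in for illustration only — the function names, the string-based "critique," and the revision step are assumptions, not the authors' GPT-4 agent implementation.

```python
# Illustrative sketch of a Critical Analysis Filter (CAF) loop: analysis
# agents critique a chatbot's draft response and the chatbot revises it
# until the critique passes. All functions are hypothetical stand-ins.

def draft_response(query: str) -> str:
    """Stand-in for the prompt-engineered chatbot's initial answer."""
    return f"Draft answer to: {query}"

def analyze(response: str) -> list[str]:
    """Stand-in for the analysis agents; returns a list of detected issues."""
    issues = []
    if "source" not in response:
        issues.append("statement lacks explicit support from scripted sources")
    return issues

def refine(response: str, issues: list[str]) -> str:
    """Stand-in for revising the response using the agents' feedback."""
    if issues:
        return response + " (grounded in the scripted source material)"
    return response

def critical_analysis_filter(query: str, max_rounds: int = 3) -> str:
    """Draft, critique, refine; deliver once the critique passes."""
    response = draft_response(query)
    for _ in range(max_rounds):
        issues = analyze(response)
        if not issues:  # critique passed; deliver the response
            break
        response = refine(response, issues)
    return response
```

The key design point the abstract implies is the bounded feedback loop: the filter sits between the chatbot and the user, so a response is only delivered after the analysis agents stop raising objections (or a round limit is hit).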

Language: English

Harnessing Large Language Models for Identification and Treatment of Obsessive-Compulsive Disorder
Inbar Levkovich

Published: June 13, 2024

Obsessive-Compulsive Disorder (OCD) is a mental health condition marked by recurrent intrusive thoughts or sensations that compel individuals to perform repetitive behaviors or acts. Obsessions and compulsions significantly disrupt daily life and cause considerable distress. Early identification and intervention can improve long-term outcomes. This study aimed to evaluate the ability of four advanced artificial intelligence models (ChatGPT-3.5, ChatGPT-4, Claude, and Bard) to accurately recognize OCD compared with human professionals, and to assess their recommended therapies and stigma attributions. The study was conducted during March 2024 utilizing 12 vignettes. Each vignette depicted a client, either a young adult or a middle-aged male or female, attending an initial therapy session. Each vignette was evaluated ten times by each model, resulting in 480 evaluations. The results were compared with those of a sample of 514 psychotherapists, as reported by Canavan. Significant differences were found. The AI models demonstrated higher recognition rates and confidence levels than the human professionals: all models showed 100% recognition, compared with 87% among psychotherapists. The AI models also recommended evidence-based interventions more frequently, with ChatGPT-3.5 and Claude at 100%, ChatGPT-4 at 90%, and Bard at 60%, compared with 61.9% among psychotherapists. Additionally, the AI models exhibited lower danger estimations, though both the models and the psychotherapists reported a high willingness to treat the described cases. The findings suggest that AI models can surpass human professionals in recognizing OCD and recommending evidence-based treatments while demonstrating less stigma. These results highlight the potential of AI tools to enhance OCD diagnosis and treatment in clinical settings.
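The evaluation design above (4 models × 12 vignettes × 10 repetitions) can be tallied with a short sketch. The model names come from the abstract; the record structure is a placeholder, not the study's actual data format.

```python
from itertools import product

# Evaluation design from the abstract: each of 4 models rates each of
# 12 vignettes ten times, giving 480 evaluations in total.
models = ["ChatGPT-3.5", "ChatGPT-4", "Claude", "Bard"]
vignettes = range(1, 13)  # 12 OCD vignettes
repeats = range(1, 11)    # ten evaluations per vignette per model

# One record per (model, vignette, repetition) evaluation.
evaluations = list(product(models, vignettes, repeats))
assert len(evaluations) == 480  # matches the 480 evaluations reported
```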

Language: English

Citations: 2

Transforming Perceptions: Exploring the Multifaceted Potential of Generative AI for People with Cognitive Disabilities (Preprint)
Dorit Hadar‐Shoval, Yuval Haber, Amir Tal et al.

JMIR Neurotechnology, Journal Year: 2024, Volume and Issue: unknown

Published: July 10, 2024

Language: English

Citations: 2

Using GenAI to train mental health professionals in suicide risk assessment: Preliminary findings
Zohar Elyoseph, Inbar Levkovitch, Yuval Haber et al.

medRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown

Published: July 17, 2024

Background: Suicide risk assessment is a critical skill for mental health professionals (MHPs), yet traditional training in this area is often limited. This study examined the potential of a generative artificial intelligence (GenAI)-based simulator to enhance self-efficacy in suicide risk assessment among MHPs. Method: A quasi-experimental study was conducted with 43 MHPs from Israel. Participants attended an online seminar and interacted with a GenAI-powered simulator. They completed pre- and post-intervention questionnaires measuring self-efficacy and willingness to treat suicidal patients. Qualitative data on user experience were also collected. Results: We found a significant increase in self-efficacy scores following the intervention. Willingness to treat patients presenting suicide risk increased slightly but did not reach statistical significance. Qualitative feedback indicated that participants found the simulator engaging and valuable for professional development. However, some raised concerns about over-reliance on AI and the need for human supervision during training. Conclusion: This preliminary study suggests that GenAI-based simulators hold promise as a tool for enhancing MHPs' competence in suicide risk assessment. Further research with larger samples and control groups is needed to confirm these findings and to address ethical considerations surrounding the use of AI-powered simulation tools. Such tools could democratize access to high-quality training in mental health, potentially contributing to global suicide prevention efforts, but their implementation should be carefully considered to ensure they complement rather than replace human expertise.

Language: English

Citations: 1

Editorial: Responsible Design, Integration, and Use of Generative AI in Mental Health (Preprint)
Oren Asman, John Torous, Amir Tal et al.

Published: Dec. 21, 2024

Language: English

Citations: 0
