The Goldilocks Zone: Finding the right balance of user and institutional risk for suicide-related generative AI queries
Anna Van Meter, Michael G. Wheaton, Victoria E. Cosgrove

et al.

PLOS Digital Health, Journal year: 2025, Issue: 4(1), Pages: e0000711

Published: Jan. 8, 2025

Generative artificial intelligence (genAI) has the potential to improve healthcare by reducing clinician burden and expanding services, among other uses. There is a significant gap between the need for mental health care and the availability of clinicians in the United States; this makes mental health care an attractive target for improved efficiency through genAI. Among the most sensitive mental health topics is suicide, and demand for crisis intervention has grown in recent years. We aimed to evaluate the quality of genAI tool responses to suicide-related queries. We entered 10 queries into five genAI tools: ChatGPT 3.5, GPT-4, a version of GPT-4 safe for protected health information, Gemini, and Bing Copilot. The response to each query was coded on seven metrics, including the presence of a suicide hotline number, content related to evidence-based suicide interventions, supportive content, and harmful content. Pooling across tools, most responses (79%) were supportive. Only 24% of responses included a hotline number, and only 4% included content consistent with evidence-based suicide prevention interventions. Harmful content was rare (5%); all such instances were delivered by the same tool. Our results suggest that genAI developers have taken a very conservative approach and constrained their models to encourage support-seeking, but little else. Finding a balance between providing much needed information and introducing excessive risk is within the capabilities of genAI developers. At this nascent stage of integrating genAI tools into healthcare systems, ensuring parity should be a goal of organizations.

Language: English

Cited by

0