Overcoming Medical Overuse with AI Assistance: An Experimental Investigation DOI
Ziyi Wang, Lijia Wei, Lian Xue

et al.

Published: Jan. 1, 2024

Language: English

Evaluating Diagnostic Accuracy and Treatment Efficacy in Mental Health: A Comparative Analysis of Large Language Model Tools and Mental Health Professionals DOI Creative Commons
Inbar Levkovich

European Journal of Investigation in Health Psychology and Education, Journal Year: 2025, Volume and Issue: 15(1), P. 9 - 9

Published: Jan. 18, 2025

Large language models (LLMs) offer promising possibilities in mental health, yet their ability to assess disorders and recommend treatments remains underexplored. This quantitative cross-sectional study evaluated four LLMs (Gemini (Gemini 2.0 Flash Experimental), Claude (Claude 3.5 Sonnet), ChatGPT-3.5, and ChatGPT-4) using text vignettes representing conditions such as depression, suicidal ideation, early chronic schizophrenia, social phobia, and PTSD. Each model’s diagnostic accuracy, treatment recommendations, and predicted outcomes were compared with norms established by mental health professionals. Findings indicated that for certain conditions, including depression and PTSD, models like ChatGPT-4 achieved higher diagnostic accuracy than human professionals. However, in more complex cases, LLM performance varied, with some models achieving only 55% accuracy while professionals performed better. The LLMs tended to suggest a broader range of proactive treatments, whereas the professionals recommended targeted psychiatric consultations and specific medications. In terms of outcome predictions, the models were generally optimistic regarding full recovery, especially with treatment, and predicted lower or only partial recovery rates, particularly in untreated cases. While the LLMs offered a broad range of recommendations, their less conservative predictions highlight the need for professional oversight. LLMs can provide valuable support in diagnostics and treatment planning but cannot replace professional discretion.
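
To make the comparison design concrete, here is a minimal Python sketch, assuming hypothetical vignettes, reference labels, and model outputs (none of them from the study), that scores each model's diagnoses against a professional reference diagnosis per condition:

```python
# Minimal sketch of the comparison design: score each model's vignette
# diagnoses against a professional reference label and report per-condition
# accuracy. All vignettes, labels, and model outputs are hypothetical.
from collections import defaultdict

# (condition, reference_diagnosis) pairs standing in for the text vignettes.
vignettes = [
    ("depression", "major depressive disorder"),
    ("PTSD", "post-traumatic stress disorder"),
    ("schizophrenia", "chronic schizophrenia"),
]

# Hypothetical diagnoses returned by two models for the vignettes above.
model_outputs = {
    "ChatGPT-4": [
        "major depressive disorder",
        "post-traumatic stress disorder",
        "schizoaffective disorder",
    ],
    "Gemini 2.0 Flash Experimental": [
        "major depressive disorder",
        "adjustment disorder",
        "chronic schizophrenia",
    ],
}

def per_condition_accuracy(outputs):
    """Fraction of vignettes per condition where the model matches the reference."""
    hits, totals = defaultdict(int), defaultdict(int)
    for (condition, reference), predicted in zip(vignettes, outputs):
        totals[condition] += 1
        hits[condition] += int(predicted.lower() == reference.lower())
    return {c: hits[c] / totals[c] for c in totals}

for model, outputs in model_outputs.items():
    print(model, per_condition_accuracy(outputs))
```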

Language: English

Citations

2

Revolutionizing diagnosis of pulmonary Mycobacterium tuberculosis based on CT: a systematic review of imaging analysis through deep learning DOI Creative Commons
Fei Zhang, Hui Han, Minglin Li

et al.

Frontiers in Microbiology, Journal Year: 2025, Volume and Issue: 15

Published: Jan. 8, 2025

The mortality rate associated with Mycobacterium tuberculosis (MTB) has seen a significant rise in regions heavily affected by the disease over the past few decades. Traditional methods for diagnosing and differentiating tuberculosis (TB) remain thorny issues, particularly in areas with a high TB epidemic and inadequate resources. Processing numerous images can be time-consuming and tedious. Therefore, there is a need for automatic segmentation and classification technologies based on lung computed tomography (CT) scans to expedite and enhance the diagnosis of TB, enabling rapid and secure identification of the condition. Deep learning (DL) offers a promising solution for automatically segmenting and classifying lung CT scans, expediting and enhancing diagnosis. This review evaluates the diagnostic accuracy of DL modalities for pulmonary tuberculosis (PTB) after searching the PubMed and Web of Science databases using the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines. Seven articles were found and included in the review. While DL has been widely used and has achieved great success in CT-based PTB diagnosis, there are still challenges to be addressed and opportunities to be explored, including data scarcity, model generalization, interpretability, and ethical concerns. Addressing these challenges requires data augmentation, interpretable models, moral frameworks, and clinical validation. Further research should focus on developing robust and generalizable models, establishing ethical guidelines, and conducting clinical validation studies. DL holds promise for transforming PTB diagnosis and improving patient outcomes.
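
As a purely illustrative sketch of the classification side of such DL pipelines (not a model from any reviewed study), a minimal PyTorch CNN over CT slices might look like the following; the architecture, input size, and random tensors are assumptions made only for the example:

```python
# Illustrative only: a tiny CNN classifier for grayscale lung CT slices
# (e.g., TB vs. non-TB). Architecture and inputs are placeholders.
import torch
import torch.nn as nn

class TinyCTClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

model = TinyCTClassifier()
dummy_batch = torch.randn(4, 1, 128, 128)  # four 128x128 grayscale CT slices
logits = model(dummy_batch)
print(logits.shape)  # torch.Size([4, 2])
```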

Language: English

Citations

0

Validating large language models against manual information extraction from case reports of drug-induced parkinsonism in patients with schizophrenia spectrum and mood disorders: a proof of concept study DOI Creative Commons
Sebastian Volkmer, Abbe R. Gluck, Andreas Meyer‐Lindenberg

et al.

Schizophrenia, Journal Year: 2025, Volume and Issue: 11(1)

Published: March 20, 2025

Abstract In this proof of concept study, we demonstrated how Large Language Models (LLMs) can automate the conversion of unstructured case reports into clinical ratings. By leveraging instructions from a standardized rating scale and evaluating the LLM’s confidence in its outputs, we aimed to refine prompting strategies and enhance reproducibility. Using this strategy on case reports of drug-induced Parkinsonism, we showed that LLM-extracted data closely align with a human rater’s manual extraction, achieving an accuracy of 90%.
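
A minimal sketch of this kind of prompting-and-validation loop, assuming a hypothetical `ask_llm` callable and an illustrative rating instruction rather than the study's actual scale or prompts:

```python
# Sketch: instruct the model with a rating-scale definition, request a rating
# plus a confidence estimate, and compare against manual ratings. The scale
# text, case snippet, and ask_llm helper are hypothetical.
import json

RATING_SCALE_INSTRUCTIONS = (
    "Rate the severity of drug-induced parkinsonism in the case report "
    "on a 0-4 scale (0 = absent, 4 = severe). "
    'Answer as JSON: {"rating": <int>, "confidence": <float 0-1>}.'
)

def extract_rating(case_report: str, ask_llm) -> dict:
    """Build the prompt, query the model, and parse its JSON answer."""
    prompt = f"{RATING_SCALE_INSTRUCTIONS}\n\nCase report:\n{case_report}"
    return json.loads(ask_llm(prompt))

def agreement(llm_ratings: list[int], manual_ratings: list[int]) -> float:
    """Fraction of cases where the LLM rating matches the manual rating."""
    matches = sum(a == b for a, b in zip(llm_ratings, manual_ratings))
    return matches / len(manual_ratings)

# Stubbed model so the sketch runs end to end without an API.
fake_llm = lambda prompt: '{"rating": 2, "confidence": 0.8}'
result = extract_rating("Patient developed rigidity after haloperidol...", fake_llm)
print(result, agreement([result["rating"]], [2]))
```

The stubbed model keeps the example self-contained; in practice the agreement check would run over many extracted cases against the manual ratings, with low-confidence outputs flagged for review.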

Language: English

Citations

0

Domain knowledge-guided geological named entities recognition of rock minerals based on prompt engineering with error feedback mechanism DOI
Qinjun Qiu, Yun Ma, Peng Han

et al.

Computers & Geosciences, Journal Year: 2025, Volume and Issue: unknown, P. 105944 - 105944

Published: April 1, 2025

Language: English

Citations

0

A Comparative Analysis of Information Gathering by Chatbots, Questionnaires, and Humans in Clinical Pre-Consultation DOI
Brenna Li, Saba Tauseef, Khai N. Truong

et al.

Published: April 24, 2025

Language: English

Citations

0

A scoping review of large language models for generative tasks in mental health care DOI Creative Commons
Yining Hua, Hongbin Na, Zehan Li

et al.

npj Digital Medicine, Journal Year: 2025, Volume and Issue: 8(1)

Published: April 30, 2025

Large language models (LLMs) show promise in mental health care for handling human-like conversations, but their effectiveness remains uncertain. This scoping review synthesizes existing research on LLM applications in mental health care, reviews model performance and clinical effectiveness, identifies gaps in current evaluation methods following a structured framework, and provides recommendations for future development. A systematic search identified 726 unique articles, of which 16 met the inclusion criteria. These studies, encompassing applications such as clinical assistance, counseling, therapy, and emotional support, show initial promise. However, evaluations were often non-standardized, with most studies relying on ad-hoc scales that limit comparability and robustness. The reliance on prompt-tuning of proprietary models, such as OpenAI's GPT series, also raises concerns about transparency and reproducibility. As the current evidence does not fully support the use of LLMs as standalone interventions, more rigorous development and evaluation guidelines are needed for safe, effective integration.

Language: English

Citations

0

Governance and Ethics in the Use of Artificial Intelligence in Health DOI
Patricia Gesser da Costa, Andréia de Bem Machado, Inara Antunes Vieira Willerding

et al.

IGI Global eBooks, Journal Year: 2025, Volume and Issue: unknown, P. 377 - 392

Published: March 7, 2025

Governance and ethics in the use of artificial intelligence (AI) in healthcare pose fundamental questions to ensure that emerging technologies are implemented in a safe, fair, and transparent manner. The increasing use of AI in the health sector offers great opportunities to improve patient care, optimize treatments and health protocols, and increase the efficiency of products and services, but it also raises significant challenges regarding privacy, equity, transparency, and accountability. This article aims to answer the research question: “How important is it to adopt governance practices and ethical precepts for AI systems in healthcare?” Through an integrative literature review of the Scopus and Web of Science databases, 11 studies were selected that present, in different ways, solutions for ensuring that technological innovations benefit patients, professionals, and society in a fair and safe manner.

Language: English

Citations

0

Using large language models as decision support tools in emergency ophthalmology DOI
Ante Krešo, Zvonimir Boban, Sime Kabic

et al.

International Journal of Medical Informatics, Journal Year: 2025, Volume and Issue: unknown, P. 105886 - 105886

Published: March 1, 2025

Language: English

Citations

0

Dall-E in hand surgery: Exploring the utility of ChatGPT image generation DOI Creative Commons
Daniel Soroudi, Daniel S. Rouhani, Alap U. Patel

et al.

Surgery Open Science, Journal Year: 2025, Volume and Issue: unknown

Published: May 1, 2025

Language: English

Citations

0

Overcoming Medical Overuse with AI Assistance: An Experimental Investigation DOI
Ziyi Wang, Lijia Wei, Lian Xue

et al.

SSRN Electronic Journal, Journal Year: 2024, Volume and Issue: unknown

Published: Jan. 1, 2024

This study evaluates the effectiveness of Artificial Intelligence (AI) in mitigating medical overtreatment, a significant issue characterized by unnecessary interventions that inflate healthcare costs and pose risks to patients. We conducted a lab-in-the-field experiment at a medical school, utilizing a novel prescription task and manipulating monetary incentives and the availability of AI assistance among medical students using a three-by-two factorial design. We tested three incentive schemes: Flat (constant pay regardless of treatment quantity), Progressive (pay increases with the number of treatments), and Regressive (penalties for overtreatment), to assess their influence on the adoption of AI assistance. Our findings demonstrate that AI assistance significantly reduced overtreatment rates, by up to 62% in conditions where the (prospective) physician's and patient's interests were most aligned. Diagnostic accuracy improved by 17% to 37%, depending on the incentive scheme. Adoption of AI advice was high, with approximately half of participants modifying their decisions based on AI input across all settings. For policy implications, we quantified the monetary (57%) and non-monetary (43%) components and highlighted AI's potential to mitigate overtreatment and enhance social welfare. Our results provide valuable insights for administrators considering AI integration into healthcare systems.
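
To make the structure of the three incentive schemes concrete, the toy sketch below computes a participant's payoff under hypothetical parameters; the base pay, per-treatment bonus, and overtreatment penalty are illustrative assumptions, not the experiment's actual payments:

```python
# Toy illustration of the Flat, Progressive, and Regressive incentive schemes.
# All payoff parameters are hypothetical and chosen only to show the structure.
def payoff(scheme: str, n_treatments: int, n_needed: int) -> float:
    base = 10.0                                   # hypothetical flat participation pay
    if scheme == "Flat":
        return base                               # pay independent of treatment quantity
    if scheme == "Progressive":
        return base + 2.0 * n_treatments          # pay rises with the number of treatments
    if scheme == "Regressive":
        overtreatment = max(0, n_treatments - n_needed)
        return base - 3.0 * overtreatment         # penalty for treatments beyond need
    raise ValueError(f"unknown scheme: {scheme}")

for scheme in ("Flat", "Progressive", "Regressive"):
    print(scheme, payoff(scheme, n_treatments=5, n_needed=2))
```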

Language: English

Citations

0