The AI‐augmented clinician: Are we ready? DOI Open Access
Bernard Dan

Developmental Medicine & Child Neurology, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 22, 2025

Language: English

Unregulated large language models produce medical device-like output DOI Creative Commons
Gary E. Weissman, Toni Mankowitz, Genevieve P. Kanter

et al.

npj Digital Medicine, Journal Year: 2025, Volume and Issue: 8(1)

Published: March 7, 2025

Large language models (LLMs) show considerable promise for clinical decision support (CDS), but none is currently authorized by the Food and Drug Administration (FDA) as a CDS device. We evaluated whether two popular LLMs could be induced to provide device-like output. We found that device-like LLM output was readily produced across a range of scenarios, suggesting a need for regulation if LLMs are formally deployed for clinical use.

Language: English

Citations

1

General Practitioners’ Experiences with Generative Artificial Intelligence in the UK: An Online Survey DOI Creative Commons
Charlotte Blease, Josefin Hagström, Carolina Garcia Sanchez

et al.

Research Square (Research Square), Journal Year: 2025, Volume and Issue: unknown

Published: March 12, 2025

Abstract Background Following the launch of ChatGPT in November 2022, interest in large language model-powered chatbots has soared, with increasing focus on the clinical potential of these tools. Building on a previous survey conducted in 2024, we sought to gauge general practitioners' (GPs') adoption of this new generation of tools to assist with any aspect of practice in the UK. Methods An online survey was disseminated in January 2025 to a non-probability sample of GPs registered with the clinician marketing platform Doctors.net.uk. The research was conducted as part of a scheduled monthly 'omnibus survey,' designed to achieve a fixed sample size of 1,000 participants. Results Of 1,005 respondents, 50% were men and 54% were 46 years or older. 25% reported using generative artificial intelligence (GenAI) tools in practice; of these, 35% used them to generate documentation after patient appointments, 27% to suggest a differential diagnosis, 24% for treatment options, and 24% for referrals. Of the 249 respondents who used AI for such tasks, 71% said that it reduced work burdens. In the last 12 months, 85% said their employer had not encouraged them to use GenAI tools, but only 3% had been prohibited from using them at work; 95% said they had received no professional training on their use at work. Conclusions This survey suggests that doctors' adoption of GenAI may be growing. Findings suggest UK GPs perceive benefits especially for administrative tasks and reasoning support, and among those adopting the tools, most users report decreased work burdens. The continued absence of training and guidance remains a concern.

Language: English

Citations

1

Augmenting Community Nursing Practice With Generative AI: A Formative Study of Diagnostic Synergies Using Simulation-Based Clinical Cases DOI Creative Commons
Odelyah Saad, Mor Saban, Erika Kerner

et al.

Journal of Primary Care & Community Health, Journal Year: 2025, Volume and Issue: 16

Published: March 1, 2025

Objective: To compare the diagnostic accuracy and clinical decision-making of experienced community nurses versus state-of-the-art generative AI (GenAI) systems for simulated patient case scenarios. Methods: In May to June 2024, 114 Israeli community nurses completed a questionnaire including 4 medical case studies. Responses were also collected from 3 GenAI models (ChatGPT-4, Claude 3.0, Gemini 1.5), analyzed both without word limits and with a 10-word constraint. Responses were scored on accuracy, speed, and comprehensiveness. Results: Nurses achieved higher average scores compared with the models' shortened responses. The GenAI responses were faster but more verbose and contained unnecessary information. The full (unconstrained) version achieved the highest scores among the models. Conclusions: While GenAI shows potential to support aspects of nursing practice, human clinicians currently exhibit advantages in holistic reasoning abilities, skills requiring experience, contextual knowledge, and the ability to bring concise, practical recommendations. Further research is needed before GenAI can adequately substitute for clinical expertise.

Language: English

Citations

1

Large Language Models—Misdiagnosing Diagnostic Excellence? DOI Creative Commons

Sumant R Ranji

JAMA Network Open, Journal Year: 2024, Volume and Issue: 7(10), P. e2440901 - e2440901

Published: Oct. 28, 2024

Citations

4

AI in medicine: preparing for the future while preserving what matters DOI Open Access

Raj Mehta, Michael E. Johansen

BMJ, Journal Year: 2025, Volume and Issue: unknown, P. r27 - r27

Published: Jan. 7, 2025

2025 is here and medicine has continued to move away from the utopian vision of our admission essays for medical school. We are spending countless hours on electronic health records scrolling through layers of data to find the information we need, receiving vital information by fax machine, and listening to on-hold music as we try to help patients progress through labyrinthine treatment pathways so that they can get the care they need. The administrative burden of modern medicine has become overwhelming.

Language: English

Citations

0

Effective Structured Information Extraction from Chest Radiography Reports Using Open-Weights Large Language Models DOI
James C. Gee, Michael S. Yao

Radiology, Journal Year: 2025, Volume and Issue: 314(1)

Published: Jan. 1, 2025

Language: English

Citations

0

Evaluating ChatGPT-4 for the Interpretation of Images from Several Diagnostic Techniques in Gastroenterology DOI Open Access
Miguel Mascarenhas, Tiago Ribeiro, B Agudo

et al.

Journal of Clinical Medicine, Journal Year: 2025, Volume and Issue: 14(2), P. 572 - 572

Published: Jan. 17, 2025

Background: Several artificial intelligence systems based on large language models (LLMs) have been commercially developed, with recent interest in integrating them for clinical questions. Recent versions now include image analysis capacity, but their performance in gastroenterology remains untested. This study assesses ChatGPT-4's performance in interpreting gastroenterology images. Methods: A total of 740 images from five procedures, namely capsule endoscopy (CE), device-assisted enteroscopy (DAE), endoscopic ultrasound (EUS), digital single-operator cholangioscopy (DSOC), and high-resolution anoscopy (HRA), were included and analyzed by ChatGPT-4 using a predefined prompt for each. The model's predictions were compared to gold standard diagnoses. Statistical analyses included accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the curve (AUC). Results: For CE, ChatGPT-4 demonstrated accuracies ranging from 50.0% to 90.0%, with AUCs of 0.50-0.90. For DAE, the model had an accuracy of 67.0% (AUC 0.670). For EUS, the system showed AUCs of 0.488 and 0.550 for differentiation between pancreatic cystic and solid lesions, respectively. The LLM differentiated benign from malignant biliary strictures with an AUC of 0.550. For HRA, overall accuracies ranged from 47.5% to 67.5%. Conclusions: ChatGPT-4 showed suboptimal diagnostic interpretation across several techniques, highlighting the need for continuous improvement before clinical adoption.
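
The abstract above reports standard diagnostic test metrics (accuracy, sensitivity, specificity, PPV, NPV, AUC) computed from model predictions against gold-standard diagnoses. The following minimal Python sketch, which is not the study's code and uses made-up example data, illustrates how such metrics are typically derived with scikit-learn.

```python
# Minimal sketch (hypothetical data, not from the study): computing the
# diagnostic metrics reported above from binary predictions vs. gold labels.
from sklearn.metrics import confusion_matrix, roc_auc_score

gold  = [1, 0, 1, 1, 0, 0, 1, 0]                   # gold-standard diagnoses (1 = abnormal)
pred  = [1, 0, 0, 1, 0, 1, 1, 0]                   # model's binary predictions
score = [0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3]   # model's confidence scores

# confusion_matrix returns counts ordered [[tn, fp], [fn, tp]] for binary labels
tn, fp, fn, tp = confusion_matrix(gold, pred).ravel()

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                        # true positive rate
specificity = tn / (tn + fp)                        # true negative rate
ppv         = tp / (tp + fp)                        # positive predictive value
npv         = tn / (tn + fn)                        # negative predictive value
auc         = roc_auc_score(gold, score)            # area under the ROC curve

print(f"acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f} "
      f"ppv={ppv:.3f} npv={npv:.3f} auc={auc:.3f}")
```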

Language: English

Citations

0

Artificial Intelligence Decision Support Systems in Resource-Limited Environments to Save Lives and Reduce Moral Injury DOI

Lindsey Umlauf, Michael A Remley, Christopher Colombo

et al.

Military Medicine, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 9, 2025

Future military conflicts are likely to involve peer or near-peer adversaries in large-scale combat operations, leading to casualty rates not seen since World War II. Casualty volume, combined with anticipated disruptions to medical evacuation, will create resource-limited environments that challenge responders to make complex, repetitive triage decisions. Similarly, pandemics, mass casualty incidents, and natural disasters strain civilian health care providers, increasing their risk for exhaustion, burnout, and moral injury. As opposed to exhaustion, which can be mitigated by appropriate rest cycles and changes in workload, moral injury is a long-lasting, impairing condition with cognitive, emotional, behavioral, social, and spiritual repercussions. Exhaustion and burnout experienced by providers during COVID-19 correlated with increased disengagement and the desire to leave the field. Telemedicine and telementoring expand access to expertise, thereby reducing an inexperienced provider's stress levels and uncertainty and improving confidence in care delivery. Artificial Intelligence Decision Support Systems (AIDeSS) may represent the next phase of clinical decision support systems across the continuum of care. These systems could help address both the scale of casualties in large-scale combat operations and critical expertise gaps in future pandemics, mass casualty events, and natural disasters. This study advocates for urgent research at the intersection of high-stress, resource-limited contexts, the causes of moral injury, and the potential for AIDeSS to reduce that risk. Understanding these dynamics may yield strategies to mitigate psychological distress in responders, increase patient survival, and improve our health care systems.

Language: English

Citations

0

Artificial Intelligence (AI) and emergency medicine: the race to the unknown (Preprint) DOI
F. Amiot, Benoit Potier, Thibault Viard

et al.

Published: Jan. 5, 2025

UNSTRUCTURED Artificial intelligence (AI), particularly large-scale language models (LLMs) such as ChatGPT, has emerged as a technology of significant impact in various fields, including medicine. This rapid development presents both opportunities and risks, notably in the context of emergency medicine, where AI could transform clinical practices but also raises concerns regarding the safety and reliability of its applications. This update aims to evaluate the implications of AI for the emergency medical field, examining its potential applications, benefits, and limitations, as well as the challenges of achieving general artificial intelligence (GAI). A literature review was conducted to analyze the current capabilities of AI in health data processing, imaging, and process improvement, while addressing the concerns raised by hallucination phenomena and LLM performance in rare or atypical cases. AI could offer substantial advantages in triage, patient flow optimization, bed management, and care prioritization. However, risks remain, including hallucinations that can generate erroneous information and limitations in infrequent situations, potentially compromising patient safety. While AI represents a revolutionary technology for emergency medicine, it necessitates a rigorous regulatory approach to mitigate the associated risks. The implementation of standards and supervisory practices becomes essential to ensure its safe and effective integration into emergency medicine.

Language: English

Citations

0

Voice EHR: introducing multimodal audio data for health DOI Creative Commons

James Anibal, Hannah Huth, Ming Li

et al.

Frontiers in Digital Health, Journal Year: 2025, Volume and Issue: 6

Published: Jan. 28, 2025

Introduction Artificial intelligence (AI) models trained on audio data may have the potential to rapidly perform clinical tasks, enhancing medical decision-making and potentially improving outcomes through early detection. Existing technologies depend on limited datasets collected with expensive recording equipment in high-income countries, which challenges deployment in resource-constrained, high-volume settings where such tools could have a profound impact on health equity. Methods This report introduces a novel protocol for audio data collection and a corresponding application that captures health information through guided questions. Results To demonstrate the potential of Voice EHR as a biomarker of health, initial experiments on data quality and multiple case studies are presented in this report. Large language models (LLMs) were used to compare transcribed voice data (from the same patients) with data collected through conventional techniques such as multiple choice questions. Information contained in the voice samples was consistently rated as equally or more relevant for clinical evaluation. Discussion The HEAR application facilitates an electronic health record ("Voice EHR") that may contain complex biomarkers of health from voice/respiratory features, speech patterns, spoken language with semantic meaning, and longitudinal context, potentially compensating for the typical limitations of unimodal clinical datasets.

Language: English

Citations

0