Generative AI’s healthcare professional role creep: a cross-sectional evaluation of publicly accessible, customised health-related GPTs DOI Creative Commons
Benjamin Chu, Natansh D. Modi, Bradley D. Menz

et al.

Frontiers in Public Health, Journal Year: 2025, Volume and Issue: 13

Published: May 9, 2025

Introduction Generative artificial intelligence (AI) is advancing rapidly; an important consideration is the public’s increasing ability to customise foundational AI models to create publicly accessible applications tailored for specific tasks. This study aims to evaluate the accessibility and functionality descriptions of customised GPTs on the OpenAI GPT store that provide health-related information or assistance to patients and healthcare professionals. Methods We conducted a cross-sectional observational study from September 2 to 6, 2024, to identify customised GPTs with health-related functions. We searched across general medicine, psychology, oncology, cardiology, and immunology applications. Identified GPTs were assessed for their name, description, intended audience, and usage. Regulatory status was checked in the U.S. Food and Drug Administration (FDA), European Union Medical Device Regulation (EU MDR), and Australian Therapeutic Goods Administration (TGA) databases. Results A total of 1,055 customised, health-related GPTs targeting patients and healthcare professionals were identified, which had collectively been used in over 360,000 conversations. Of these, 587 were psychology-related, 247 were in general medicine, 105 in oncology, 52 in cardiology, 30 in immunology, and 34 in other health specialties. Notably, 624 of the identified GPTs included professional titles (e.g., doctor, nurse, psychiatrist, oncologist) in their names and/or descriptions, suggesting they were taking on such roles. None were FDA, EU MDR, or TGA-approved. Discussion This study highlights the rapid emergence of publicly accessible, customised health-related GPTs. The findings raise questions about whether current medical device regulations are keeping pace with technological advancements. The results also highlight the potential “role creep” of chatbots, whereby they begin to perform, or claim, functions traditionally reserved for licensed professionals, underscoring safety concerns.
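To illustrate the kind of tallying the Methods describe, here is a minimal, hypothetical Python sketch (not the authors' code; the records and title list are invented for illustration) that counts listings by specialty and flags professional titles appearing in names or descriptions:

```python
# Hypothetical sketch of the study's tallying: count GPT listings by
# specialty and flag professional titles in names/descriptions.
from collections import Counter

# Invented example records; the study collected name, description,
# intended audience, and usage for each customised GPT.
listings = [
    {"name": "AI Psychiatrist", "description": "Talk through your worries", "specialty": "psychology"},
    {"name": "OncoHelper", "description": "Chemotherapy questions answered", "specialty": "oncology"},
    {"name": "Heart Health Coach", "description": "Your virtual cardiology nurse", "specialty": "cardiology"},
]

PROFESSIONAL_TITLES = ("doctor", "nurse", "psychiatrist", "oncologist")

by_specialty = Counter(item["specialty"] for item in listings)
role_creep = [
    item["name"] for item in listings
    if any(title in f'{item["name"]} {item["description"]}'.lower()
           for title in PROFESSIONAL_TITLES)
]

print(by_specialty)  # Counter({'psychology': 1, 'oncology': 1, 'cardiology': 1})
print(role_creep)    # ['AI Psychiatrist', 'Heart Health Coach']
```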

Language: English

Overview of South Korean Guidelines for Approval of Large Language or Multimodal Models as Medical Devices: Key Features and Areas for Improvement DOI
Seong Ho Park, Geraldine Dean, Eduardo Ortíz

et al.

Korean Journal of Radiology, Journal Year: 2025, Volume and Issue: 26

Published: Jan. 1, 2025

Language: English

Citations: 0

Regulation of AI: Learnings from Medical Education DOI
Kerstin Noëlle Vokinger, Derek Soled, Raja-Elie E. Abdulnour

et al.

NEJM AI, Journal Year: 2025, Volume and Issue: 2(5)

Published: April 24, 2025

Language: English

Citations: 0

A Current Review of Generative AI in Medicine: Core Concepts, Applications, and Current Limitations DOI
Pouria Rouzrokh, Bardia Khosravi, Shahriar Faghani

et al.

Current Reviews in Musculoskeletal Medicine, Journal Year: 2025, Volume and Issue: unknown

Published: April 30, 2025

Language: English

Citations: 0

Generalizability of FDA-Approved AI-Enabled Medical Devices for Clinical Use DOI Creative Commons
Daniel Windecker, Giovanni Baj, Isaac Shiri

et al.

JAMA Network Open, Journal Year: 2025, Volume and Issue: 8(4), P. e258052 - e258052

Published: April 30, 2025

Importance The primary objective of any newly developed medical device using artificial intelligence (AI) is to ensure its safe and effective use in broader clinical practice. Objective To evaluate key characteristics of AI-enabled medical devices approved by the US Food and Drug Administration (FDA) that are relevant to their generalizability, as reported in the public domain. Design, Setting, and Participants This cross-sectional study collected information on all AI-enabled medical devices that had received FDA approval and were listed on the FDA website as of August 31, 2024. Main Outcomes and Measures For each device, the detailed information available at the time of approval was summarized, specifically examining evaluation aspects such as the presence of design and performance studies, the availability of discriminatory metrics, and age- and sex-specific data. Results In total, 903 FDA-approved devices were analyzed, most of which became available in the last decade. Devices were primarily related to the specialties of radiology (692 [76.6%]), cardiovascular medicine (91 [10.1%]), and neurology (29 [3.2%]). Most were software only (664 [73.5%]), and 6 (0.7%) were implantable. Detailed descriptions of device development were absent from the publicly provided summaries. Clinical performance studies were reported in 505 summaries (55.9%), while 218 (24.1%) explicitly stated that no clinical studies were conducted. Retrospective designs were most common (193 [38.2%]), with 41 (8.1%) studies being prospective and 12 (2.4%) randomized. Discriminatory metrics were reported in 200 summaries (sensitivity: 183 [36.2%]; specificity: 176 [34.9%]; area under the curve: 82 [16.2%]). Among the clinical studies, less than one-third reported sex-specific data (145 [28.7%]), and 117 (23.2%) addressed age-related subgroups. Conclusions and Relevance In this cross-sectional study, clinical performance studies were reported for approximately half of the devices, yet the available information was often insufficient for a comprehensive assessment of generalizability, emphasizing the need for ongoing monitoring and regular re-evaluation to identify and address unexpected changes in performance during clinical use.
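As a quick arithmetic check (not part of the paper), the reported percentages can be reproduced from the stated counts; device-level shares appear to use the 903 devices as the denominator, and study-design and metric shares the 505 summaries that reported clinical performance studies:

```python
# Sanity-check the abstract's percentages against its stated counts.
# Device-level shares divide by all 903 devices; study-design and
# metric shares divide by the 505 summaries reporting clinical studies.
DEVICES, STUDIES = 903, 505

device_level = {"radiology": 692, "cardiovascular medicine": 91,
                "neurology": 29, "software only": 664,
                "clinical studies reported": 505, "no clinical studies": 218}
study_level = {"retrospective": 193, "prospective": 41, "randomized": 12,
               "sensitivity reported": 183, "sex-specific data": 145}

for label, n in device_level.items():
    print(f"{label}: {n}/{DEVICES} = {n / DEVICES:.1%}")   # e.g. 692/903 = 76.6%
for label, n in study_level.items():
    print(f"{label}: {n}/{STUDIES} = {n / STUDIES:.1%}")   # e.g. 193/505 = 38.2%
```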

Language: English

Citations: 0
