Commendations and Concerns on the Analysis of Stochasticity in Large Language Models
Seong Ho Park, Hyungjin Kim, Soon Ho Yoon

et al.

Radiology, Journal year: 2024, Number 313(3)

Published: Dec. 1, 2024

Language: English

Reporting Guidelines for Artificial Intelligence Studies in Healthcare (for Both Conventional and Large Language Models): What’s New in 2024
Seong Ho Park, Chong Hyun Suh

Korean Journal of Radiology, Journal year: 2024, Number 25(8), pp. 687 - 687

Published: Jan. 1, 2024

Language: English

Cited by

5

A retrospective evaluation of the potential of ChatGPT in the accurate diagnosis of acute stroke
Beyza Nur Kuzan, İsmail Meşe, Servan Yaşar

et al.

Diagnostic and Interventional Radiology, Journal year: 2024, Number unknown

Published: Sep. 2, 2024

Stroke is a neurological emergency requiring rapid, accurate diagnosis to prevent severe consequences. Early diagnosis is crucial for reducing morbidity and mortality. Artificial intelligence (AI) support tools, such as Chat Generative Pre-trained Transformer (ChatGPT), offer rapid diagnostic advantages. This study assesses ChatGPT's accuracy in interpreting diffusion-weighted imaging (DWI) for acute stroke diagnosis.

Language: English

Cited by

5

Encouragement vs. liability: How prompt engineering influences ChatGPT-4's radiology exam performance
Daniel Nguyen, Allison M. MacKenzie, Young H. Kim

et al.

Clinical Imaging, Journal year: 2024, Number 115, pp. 110276 - 110276

Published: Sep. 6, 2024

Language: English

Cited by

5

Reflections on 2024 and Perspectives for 2025 for KJR
Seong Ho Park

Korean Journal of Radiology, Journal year: 2025, Number 26(1), pp. 1 - 1

Published: Jan. 1, 2025

Language: English

Cited by

0

Evaluating large language models and agents in healthcare: key challenges in clinical applications
Xiaolan Chen, Jie Xiang, Shanfu Lu

et al.

Intelligent Medicine, Journal year: 2025, Number unknown

Published: March 1, 2025

Language: English

Cited by

0

Comparative Performance of Anthropic Claude and OpenAI GPT Models in Basic Radiological Imaging Tasks
Cindy Nguyen, Daniel Carrion, Mohamed Khaldoun Badawy

et al.

Journal of Medical Imaging and Radiation Oncology, Journal year: 2025, Number unknown

Published: April 8, 2025

ABSTRACT Background: Publicly available artificial intelligence (AI) Vision Language Models (VLMs) are constantly improving. The advent of vision capabilities in these models could enhance radiology workflows. Evaluating their performance in radiological image interpretation is vital to their potential integration into practice. Aim: This study aims to evaluate the proficiency and consistency of publicly available VLMs, Anthropic's Claude and OpenAI's GPT, across multiple iterations in basic radiological tasks. Method: Subsets from the ROCOv2 and MURAv1.1 datasets were used to evaluate 6 VLMs. A system prompt and image were input into each model three times. The outputs were compared against the dataset captions to assess each model's accuracy in recognising imaging modality and anatomy and in detecting fractures on radiographs. The consistency of each model's output was also analysed. Results: Evaluation showed high modality recognition, with some models achieving 100%. Anatomical recognition ranged between 61% and 85% across all models tested. On the MURAv1.1 dataset, Claude-3.5-Sonnet had the highest anatomical recognition with 57% accuracy, while GPT-4o had the best fracture detection with 62% accuracy. The most consistent model reached 83% and 92% consistency for anatomy recognition and fracture detection, respectively. Conclusion: Given GPT's current reliability, use in clinical settings is not yet feasible. This study highlights the need for ongoing development and the establishment of standardised testing techniques to ensure these models achieve reliable performance.
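The evaluation protocol described in this abstract (each image queried three times, with responses scored against dataset captions for accuracy and agreement) can be illustrated with a short sketch. This is a minimal illustration only, not the authors' code: the `query_vlm` helper, the prompt text, and the demo sample are hypothetical placeholders for real Claude/GPT vision API calls.

```python
# Sketch of a repeated-query accuracy/consistency check for a vision-language model.
from collections import Counter
from typing import Callable

def query_vlm(model: str, system_prompt: str, image_path: str) -> str:
    # Hypothetical placeholder: replace with a real Claude/GPT vision API call.
    return "radiograph, wrist, no fracture"

def evaluate(model: str, samples: list[dict], n_trials: int = 3,
             ask: Callable[[str, str, str], str] = query_vlm) -> dict:
    """Accuracy = majority answer contains the label; consistency = share of
    trials agreeing with the majority answer, averaged over samples."""
    system_prompt = "Identify the imaging modality and anatomy shown."
    correct, agreement = 0, 0.0
    for s in samples:  # each sample: {"image": path, "label": expected keyword}
        answers = [ask(model, system_prompt, s["image"]).lower()
                   for _ in range(n_trials)]
        majority, votes = Counter(answers).most_common(1)[0]
        correct += int(s["label"].lower() in majority)
        agreement += votes / n_trials
    n = len(samples)
    return {"accuracy": correct / n, "consistency": agreement / n}

if __name__ == "__main__":
    demo = [{"image": "demo_wrist.png", "label": "wrist"}]  # hypothetical sample
    print(evaluate("claude-3-5-sonnet", demo))
```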

Language: English

Cited by

0

Evaluating artificial intelligence performance in medical image analysis: Sensitivity, specificity, accuracy, and precision of ChatGPT-4o on Kellgren-Lawrence grading of knee X-ray radiographs
Mustafa Hüseyin Temel, Yakup Erden, Fatih Bağcıer

и другие.

The Knee, Journal year: 2025, Number 55, pp. 79 - 84

Published: April 23, 2025

Language: English

Cited by

0

Performance of GPT-4 Turbo and GPT-4o in Korean Society of Radiology In-Training Examinations
Arum Choi, Hyun Gi Kim, Moon Hyung Choi

et al.

Korean Journal of Radiology, Journal year: 2025, Number 26

Published: Jan. 1, 2025

Despite the potential of large language models for radiology training, their ability to handle image-based radiological questions remains poorly understood. This study aimed to evaluate the performance of GPT-4 Turbo and GPT-4o on radiology resident examinations, analyze differences across question types, and compare the results with those of residents at different levels. A total of 776 multiple-choice questions from the Korean Society of Radiology In-Training Examinations were used, forming two sets: one originally written in Korean and the other translated into English. We evaluated GPT-4 Turbo (gpt-4-turbo-2024-04-09) and GPT-4o (gpt-4o-2024-11-20) on these questions with the temperature set to zero, determining accuracy based on the majority vote of five independent trials. Performance was analyzed by question type (text-only vs. image-based) and benchmarked against nationwide residents' performance. The impact of the input language (Korean or English) on model performance was also examined. GPT-4o outperformed GPT-4 Turbo on both image-based (48.2% vs. 41.8%, P = 0.002) and text-only questions (77.9% vs. 69.0%, P = 0.031). On image-based questions, GPT-4 Turbo and GPT-4o showed performance comparable to that of 1st-year residents (41.8% and 48.2%, respectively, vs. 43.3%; P = 0.608 and 0.079, respectively) but lower than that of 2nd- to 4th-year residents (vs. 56.0%-63.9%, all P ≤ 0.005). For text-only questions, the models performed better than residents of all years (69.0% and 77.9% vs. 44.7%-57.5%, all P ≤ 0.039). Performance on the English- and Korean-version questions showed no significant difference for either model (all P ≥ 0.275). In conclusion, GPT-4o outperformed GPT-4 Turbo across question types. On image-based questions, the models' performance matched that of 1st-year residents but not that of higher-year residents. Both models demonstrated superior performance on text-only questions compared with image-based questions. The models showed consistent performances with Korean and English inputs.
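The scoring protocol described here (temperature set to zero, with each question's final answer taken as the majority vote over five independent trials) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the study's actual code: `ask_model`, the demo question, and the answer format are hypothetical placeholders for real gpt-4-turbo / gpt-4o API calls.

```python
# Sketch of majority-vote scoring for multiple-choice exam questions.
from collections import Counter

def ask_model(model: str, question: str, choices: list[str]) -> str:
    # Hypothetical placeholder: replace with a real GPT-4 Turbo / GPT-4o API call
    # made with temperature=0, returning a single option letter.
    return "A"

def majority_vote_accuracy(model: str, exam: list[dict], n_trials: int = 5) -> float:
    """exam items: {"question": str, "choices": [...], "answer": option letter}."""
    correct = 0
    for item in exam:
        votes = [ask_model(model, item["question"], item["choices"])
                 for _ in range(n_trials)]
        majority = Counter(votes).most_common(1)[0][0]  # most frequent answer
        correct += int(majority == item["answer"])
    return correct / len(exam)

if __name__ == "__main__":
    sample_exam = [{"question": "Which sequence best depicts acute infarct?",  # hypothetical item
                    "choices": ["A) DWI", "B) T1", "C) T2", "D) FLAIR"],
                    "answer": "A"}]
    print(majority_vote_accuracy("gpt-4o-2024-11-20", sample_exam))
```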

Language: English

Cited by

0

ChatGPT in veterinary medicine: a practical guidance of generative artificial intelligence in clinics, education, and research.
Candice P. Chu

arXiv (Cornell University), Journal year: 2024, Number 11, pp. 1395934 - 1395934

Published: Jan. 1, 2024

ChatGPT, the most accessible generative artificial intelligence (AI) tool, offers considerable potential for veterinary medicine, yet a dedicated review of its specific applications is lacking. This review concisely synthesizes the latest research and practical applications of ChatGPT within the clinical, educational, and research domains of veterinary medicine. It intends to provide guidance and actionable examples of how generative AI can be directly utilized by veterinary professionals without a programming background. For practitioners, ChatGPT can extract patient data, generate progress notes, and potentially assist in diagnosing complex cases. Veterinary educators can create custom GPTs for student support, while students can utilize ChatGPT for exam preparation. ChatGPT can aid in academic writing tasks in research, but publishers have set requirements that authors must follow. Despite its transformative potential, careful use is essential to avoid pitfalls like hallucination. This review addresses ethical considerations, provides learning resources, and offers a tangible guide to responsible implementation. A table of key takeaways was provided to summarize this review. By highlighting benefits and limitations, it equips veterinarians, educators, and researchers to harness the power of generative AI effectively.

Language: English

Cited by

3

Challenges and Proposed Additional Considerations for Medical Device Approval of Large Language Models Beyond Conventional AI
Seong Ho Park, Namkug Kim

Radiology, Journal year: 2024, Number 312(3)

Published: Sep. 1, 2024

Language: English

Cited by

2