Commendations and Concerns on the Analysis of Stochasticity in Large Language Models DOI
Seong Ho Park, Hyungjin Kim, Soon Ho Yoon

et al.

Radiology, Journal Year: 2024, Volume and Issue: 313(3)

Published: Dec. 1, 2024

Language: English

Reporting Guidelines for Artificial Intelligence Studies in Healthcare (for Both Conventional and Large Language Models): What’s New in 2024 DOI
Seong Ho Park, Chong Hyun Suh

Korean Journal of Radiology, Journal Year: 2024, Volume and Issue: 25(8), P. 687 - 687

Published: Jan. 1, 2024

Language: English

Citations: 5

A retrospective evaluation of the potential of ChatGPT in the accurate diagnosis of acute stroke DOI Creative Commons
Beyza Nur Kuzan, İsmail Meşe, Servan Yaşar

et al.

Diagnostic and Interventional Radiology, Journal Year: 2024, Volume and Issue: unknown

Published: Sept. 2, 2024

Stroke is a neurological emergency requiring rapid, accurate diagnosis to prevent severe consequences. Early diagnosis is crucial for reducing morbidity and mortality. Artificial intelligence (AI) support tools, such as Chat Generative Pre-trained Transformer (ChatGPT), offer rapid diagnostic advantages. This study assesses ChatGPT's accuracy in interpreting diffusion-weighted imaging (DWI) for acute stroke diagnosis.

Language: English

Citations: 5

Encouragement vs. liability: How prompt engineering influences ChatGPT-4's radiology exam performance DOI
Daniel Nguyen, Allison M. MacKenzie, Young H. Kim

et al.

Clinical Imaging, Journal Year: 2024, Volume and Issue: 115, P. 110276 - 110276

Published: Sept. 6, 2024

Language: English

Citations: 5

Reflections on 2024 and Perspectives for 2025 for KJR DOI
Seong Ho Park

Korean Journal of Radiology, Journal Year: 2025, Volume and Issue: 26(1), P. 1 - 1

Published: Jan. 1, 2025

Language: English

Citations: 0

Evaluating large language models and agents in healthcare: key challenges in clinical applications DOI Creative Commons
Xiaolan Chen, Jie Xiang, Shanfu Lu

et al.

Intelligent Medicine, Journal Year: 2025, Volume and Issue: unknown

Published: March 1, 2025

Language: English

Citations: 0

Comparative Performance of Anthropic Claude and OpenAI GPT Models in Basic Radiological Imaging Tasks DOI

Cindy Nguyen, Daniel Carrion, Mohamed Khaldoun Badawy

et al.

Journal of Medical Imaging and Radiation Oncology, Journal Year: 2025, Volume and Issue: unknown

Published: April 8, 2025

ABSTRACT Background: Publicly available artificial intelligence (AI) Vision Language Models (VLMs) are constantly improving. The advent of vision capabilities on these models could enhance radiology workflows. Evaluating their performance in radiological image interpretation is vital to their potential integration into practice. Aim: This study aims to evaluate the proficiency and consistency of publicly available VLMs, Anthropic's Claude and OpenAI's GPT, across multiple iterations of basic radiological tasks. Method: Subsets from the datasets ROCOv2 and MURAv1.1 were used to evaluate 6 VLMs. A system prompt and input were given to each model three times. The outputs were compared with the dataset captions to assess each model's accuracy in recognising modality and anatomy, and in detecting fractures on radiographs. The consistency of the output was also analysed. Results: Evaluation showed high accuracy in modality recognition, with some models achieving 100%. Anatomical recognition ranged between 61% and 85% across all models tested. On the MURAv1.1 dataset, Claude-3.5-Sonnet had the highest anatomical recognition with 57% accuracy, while GPT-4o performed best in fracture detection with 62% accuracy. The most consistent model showed 83% and 92% consistency in anatomy and fracture detection, respectively. Conclusion: Given GPT's current reliability, integration into clinical settings is not yet feasible. This study highlights the need for ongoing development and the establishment of standardised testing techniques to ensure these models achieve reliable performance.
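The study's evaluation protocol (repeat each prompt three times, then score accuracy against dataset captions and agreement across repeats) can be sketched in a few lines. This is a minimal illustration, not the authors' code; the labels and model outputs below are hypothetical.

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the reference labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def consistency(runs):
    """Fraction of items for which every repeated run gave the same answer."""
    n = len(runs[0])
    return sum(len({run[i] for run in runs}) == 1 for i in range(n)) / n

# Hypothetical modality labels for 4 images, with 3 repeated model runs.
labels = ["MRI", "CT", "X-ray", "CT"]
runs = [
    ["MRI", "CT", "X-ray", "CT"],
    ["MRI", "CT", "X-ray", "US"],
    ["MRI", "CT", "X-ray", "CT"],
]

print(accuracy(runs[0], labels))  # per-run accuracy
print(consistency(runs))          # agreement across the 3 repeats
```

With these toy values, the first run is fully accurate while only 3 of 4 items receive the same answer in all three runs, separating correctness from repeatability as the study does.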

Language: English

Citations: 0

Evaluating artificial intelligence performance in medical image analysis: Sensitivity, specificity, accuracy, and precision of ChatGPT-4o on Kellgren-Lawrence grading of knee X-ray radiographs DOI
Mustafa Hüseyin Temel, Yakup Erden, Fatih Bağcıer

et al.

The Knee, Journal Year: 2025, Volume and Issue: 55, P. 79 - 84

Published: April 23, 2025
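The four metrics named in this title follow directly from confusion-matrix counts. As a minimal sketch (the counts below are hypothetical, not the study's data, with one Kellgren-Lawrence grade treated as the "positive" class):

```python
def metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # recall: share of positives found
    specificity = tn / (tn + fp)            # share of negatives found
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)              # positive predictive value
    return sensitivity, specificity, accuracy, precision

# Hypothetical counts: 100 radiographs scored against a reference grading.
print(metrics(tp=40, fp=10, tn=45, fn=5))
```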

Language: English

Citations: 0

Performance of GPT-4 Turbo and GPT-4o in Korean Society of Radiology In-Training Examinations DOI
Arum Choi, Hyun Gi Kim, Moon Hyung Choi

et al.

Korean Journal of Radiology, Journal Year: 2025, Volume and Issue: 26

Published: Jan. 1, 2025

Despite the potential of large language models for radiology training, their ability to handle image-based radiological questions remains poorly understood. This study aimed to evaluate the performance of GPT-4 Turbo and GPT-4o in radiology resident examinations, analyze differences across question types, and compare the results with those of residents at different training levels. A total of 776 multiple-choice questions from the Korean Society of Radiology In-Training Examinations were used, forming two sets: one originally written in Korean and the other translated into English. We evaluated GPT-4 Turbo (gpt-4-turbo-2024-04-09) and GPT-4o (gpt-4o-2024-11-20) on these questions with the temperature set to zero, determining accuracy based on the majority vote of five independent trials. We analyzed performance by question type (text-only vs. image-based) and benchmarked the models against nationwide residents' performance. The impact of the input language (Korean or English) on model performance was also examined. GPT-4o outperformed GPT-4 Turbo on both image-based (48.2% vs. 41.8%, P = 0.002) and text-only questions (77.9% vs. 69.0%, P = 0.031). On image-based questions, GPT-4 Turbo and GPT-4o showed performance comparable to that of 1st-year residents (41.8% and 48.2%, respectively, vs. 43.3%; P = 0.608 and 0.079, respectively) but lower than that of 2nd- to 4th-year residents (vs. 56.0%-63.9%, all P ≤ 0.005). For text-only questions, GPT-4 Turbo and GPT-4o performed better than residents across all years (69.0% and 77.9% vs. 44.7%-57.5%, all P ≤ 0.039). Performance on the English and Korean versions showed no significant difference for either model (all P ≥ 0.275). In summary, model performance varied across question types: on text-only questions the models matched or exceeded higher-year residents, both demonstrated superior performance on text-only compared with image-based questions, and both showed consistent performance across Korean and English inputs.
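The scoring rule described in the abstract (run each question five times at temperature zero, then take the majority vote as the model's final answer) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code; the trial answers and answer key are hypothetical.

```python
from collections import Counter

def majority_vote(trials):
    """Final answer per question = most common choice across repeated trials."""
    n_questions = len(trials[0])
    final = []
    for i in range(n_questions):
        votes = Counter(trial[i] for trial in trials)
        final.append(votes.most_common(1)[0][0])
    return final

# Hypothetical answers to 3 multiple-choice questions over 5 independent trials.
trials = [
    ["A", "C", "B"],
    ["A", "D", "B"],
    ["A", "C", "B"],
    ["B", "C", "B"],
    ["A", "C", "D"],
]
answer_key = ["A", "C", "B"]

final = majority_vote(trials)
accuracy = sum(f == k for f, k in zip(final, answer_key)) / len(answer_key)
print(final, accuracy)
```

Majority voting smooths over occasional divergent trials, which matters even at temperature zero, where model outputs are not guaranteed to be fully deterministic.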

Language: English

Citations: 0

ChatGPT in veterinary medicine: a practical guidance of generative artificial intelligence in clinics, education, and research. DOI Open Access
Candice P. Chu

arXiv (Cornell University), Journal Year: 2024, Volume and Issue: 11, P. 1395934 - 1395934

Published: Jan. 1, 2024

ChatGPT, the most accessible generative artificial intelligence (AI) tool, offers considerable potential for veterinary medicine, yet a dedicated review of its specific applications is lacking. This review concisely synthesizes the latest research and practical applications of ChatGPT within the clinical, educational, and research domains of veterinary medicine. It intends to provide guidance and actionable examples of how generative AI can be directly utilized by veterinary professionals without a programming background. For practitioners, ChatGPT can extract patient data, generate progress notes, and potentially assist in diagnosing complex cases. Veterinary educators can create custom GPTs for student support, while students can utilize ChatGPT for exam preparation. ChatGPT can aid academic writing tasks in research, but publishers have set requirements that authors must follow. Despite its transformative potential, careful use is essential to avoid pitfalls like hallucination. This review addresses ethical considerations, provides learning resources, and offers a tangible guide to responsible implementation. A table of key takeaways is provided to summarize the review. By highlighting both benefits and limitations, it equips veterinarians, educators, and researchers to harness the power of generative AI effectively.

Language: English

Citations: 3

Challenges and Proposed Additional Considerations for Medical Device Approval of Large Language Models Beyond Conventional AI DOI
Seong Ho Park, Namkug Kim

Radiology, Journal Year: 2024, Volume and Issue: 312(3)

Published: Sept. 1, 2024

Language: English

Citations: 2