Radiology, Journal Year: 2024, Issue: 313(3)
Published: Dec. 1, 2024
Language: English
Korean Journal of Radiology, Journal Year: 2024, Issue: 25(8), pp. 687-687
Published: Jan. 1, 2024
Language: English
Cited: 5
Diagnostic and Interventional Radiology, Journal Year: 2024, Issue: unknown
Published: Sep. 2, 2024
Stroke is a neurological emergency requiring rapid, accurate diagnosis to prevent severe consequences. Early diagnosis is crucial for reducing morbidity and mortality. Artificial intelligence (AI) support tools, such as Chat Generative Pre-trained Transformer (ChatGPT), offer rapid diagnostic advantages. This study assesses ChatGPT's accuracy in interpreting diffusion-weighted imaging (DWI) for acute stroke diagnosis.
Language: English
Cited: 5
Clinical Imaging, Journal Year: 2024, Issue: 115, pp. 110276-110276
Published: Sep. 6, 2024
Language: English
Cited: 5
Korean Journal of Radiology, Journal Year: 2025, Issue: 26(1), pp. 1-1
Published: Jan. 1, 2025
Language: English
Cited: 0
Intelligent Medicine, Journal Year: 2025, Issue: unknown
Published: Mar. 1, 2025
Language: English
Cited: 0
Journal of Medical Imaging and Radiation Oncology, Journal Year: 2025, Issue: unknown
Published: Apr. 8, 2025
ABSTRACT Background: Publicly available artificial intelligence (AI) Vision Language Models (VLMs) are constantly improving. The advent of vision capabilities in these models could enhance radiology workflows, and evaluating their performance in radiological image interpretation is vital to any potential integration into practice. Aim: This study aims to evaluate the proficiency and consistency of publicly available VLMs, Anthropic's Claude and OpenAI's GPT, across multiple iterations of basic radiological interpretation tasks. Method: Subsets from two datasets, ROCOv2 and MURAv1.1, were used to evaluate 6 VLMs. A system prompt and image were input to each model three times. Outputs were compared with the dataset captions to assess each model's accuracy in recognising modality and anatomy, and in detecting fractures on radiographs. The consistency of the output across runs was also analysed. Results: Evaluation showed high modality recognition, with some models achieving 100%. Anatomical recognition ranged between 61% and 85% across all models tested. On the MURAv1.1 dataset, Claude-3.5-Sonnet had the highest anatomical recognition at 57% accuracy, while GPT-4o had the best fracture detection at 62% accuracy. The most consistent model reached 83% and 92% consistency for anatomy recognition and fracture detection, respectively. Conclusion: Given the models' current reliability, integration into clinical settings is not yet feasible. This study highlights the need for ongoing development and the establishment of standardised testing techniques to ensure these models achieve reliable performance.
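As a rough illustration of the protocol this abstract describes (the same system prompt and image submitted to a model three times, with outputs checked for agreement), here is a minimal sketch assuming the OpenAI Python client; the study's actual prompt wording, image handling, and consistency metric are not given in the abstract, so those details are hypothetical placeholders.

```python
import base64
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative system prompt; the study's actual prompt is not published here.
SYSTEM_PROMPT = ("You are assisting with a radiology benchmark. State the imaging "
                 "modality and the anatomy shown, in a few words.")

def encode_image(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def query_vlm(image_path: str, n_runs: int = 3, model: str = "gpt-4o") -> list[str]:
    """Send the same system prompt and image n_runs times, as in the study."""
    b64 = encode_image(image_path)
    answers = []
    for _ in range(n_runs):
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": [
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ]},
            ],
        )
        answers.append(resp.choices[0].message.content.strip())
    return answers

# Hypothetical sample file from a MURAv1.1-style subset.
answers = query_vlm("mura_sample.png")
top, count = Counter(answers).most_common(1)[0]
print(f"majority answer: {top!r}; agreement across runs: {count / len(answers):.0%}")
```

Consistency here is simply the share of runs agreeing with the majority answer; the paper's exact consistency metric may differ.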
Language: English
Cited: 0
The Knee, Journal Year: 2025, Issue: 55, pp. 79-84
Published: Apr. 23, 2025
Language: English
Cited: 0
Korean Journal of Radiology, Journal Year: 2025, Issue: 26
Published: Jan. 1, 2025
Despite the potential of large language models for radiology training, their ability to handle image-based radiological questions remains poorly understood. This study aimed to evaluate the performance of GPT-4 Turbo and GPT-4o on radiology resident examinations, analyze differences across question types, and compare the results with those of residents at different training levels. A total of 776 multiple-choice questions from the Korean Society of Radiology In-Training Examinations were used, forming two sets: one originally written in Korean and the other translated into English. We evaluated GPT-4 Turbo (gpt-4-turbo-2024-04-09) and GPT-4o (gpt-4o-2024-11-20) on these questions with the temperature set to zero, determining accuracy based on the majority vote of five independent trials. We analyzed performance by question type (text-only vs. image-based) and benchmarked the models against nationwide residents' performance. The impact of the input language (Korean or English) on model performance was also examined. GPT-4o outperformed GPT-4 Turbo on both image-based (48.2% vs. 41.8%, P = 0.002) and text-only questions (77.9% vs. 69.0%, P = 0.031). On image-based questions, GPT-4 Turbo and GPT-4o showed accuracy comparable to that of 1st-year residents (41.8% and 48.2%, respectively, vs. 43.3%; P = 0.608 and 0.079, respectively) but lower than that of 2nd- to 4th-year residents (vs. 56.0%-63.9%, all P ≤ 0.005). For text-only questions, both models performed better than residents of all years (69.0% and 77.9% vs. 44.7%-57.5%, all P ≤ 0.039). Performance on the English and Korean versions showed no significant difference for either model (all P ≥ 0.275). In conclusion, GPT-4o outperformed GPT-4 Turbo across question types; the models' performance on image-based questions matched that of 1st-year residents but fell below that of higher-year residents. Both models demonstrated superior performance on text-only compared with image-based questions, and both showed consistent performances between Korean and English inputs.
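The evaluation protocol above (temperature zero, majority vote over five independent trials) can be sketched as follows, again assuming the OpenAI Python client; the sample question and the single-letter answer parsing are hypothetical stand-ins, not the study's actual materials.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_mcq(question: str, model: str, n_trials: int = 5) -> str:
    """Ask the same multiple-choice question n_trials times at temperature 0
    and return the majority-vote answer, mirroring the study's protocol."""
    votes = []
    for _ in range(n_trials):
        resp = client.chat.completions.create(
            model=model,
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "Answer with a single option letter (A-E) only."},
                {"role": "user", "content": question},
            ],
        )
        # Naive parsing: take the first character as the chosen option.
        votes.append(resp.choices[0].message.content.strip()[:1].upper())
    return Counter(votes).most_common(1)[0][0]

# Hypothetical text-only item; image-based items would also attach the figure.
q = ("Which MRI sequence is most sensitive for acute ischemic stroke?\n"
     "A. T1  B. T2  C. DWI  D. FLAIR  E. GRE")
print(answer_mcq(q, model="gpt-4o-2024-11-20"))  # model version from the abstract
```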
Language: English
Cited: 0
arXiv (Cornell University), Journal Year: 2024, Issue: 11, pp. 1395934-1395934
Published: Jan. 1, 2024
ChatGPT, the most accessible generative artificial intelligence (AI) tool, offers considerable potential for veterinary medicine, yet a dedicated review of its specific applications is lacking. This review concisely synthesizes the latest research and practical applications of ChatGPT within the clinical, educational, and research domains of veterinary medicine. It intends to provide guidance and actionable examples of how AI can be directly utilized by veterinary professionals without a programming background. For practitioners, ChatGPT can extract patient data, generate progress notes, and potentially assist in diagnosing complex cases. Veterinary educators can create custom GPTs for student support, while students can utilize it for exam preparation. ChatGPT can aid academic writing tasks in research, but publishers have set requirements for authors to follow. Despite its transformative potential, careful use is essential to avoid pitfalls like hallucination. This review addresses ethical considerations, provides learning resources, and offers a tangible guide for responsible implementation. A table of key takeaways is provided to summarize the review. By highlighting both benefits and limitations, it equips veterinarians, educators, and researchers to harness the power of ChatGPT effectively.
Language: English
Cited: 3
Radiology, Journal Year: 2024, Issue: 312(3)
Published: Sep. 1, 2024
Language: English
Cited: 2