Radiology, Journal Year: 2024, Volume and Issue: 313(3)
Published: Dec. 1, 2024
Language: English
Korean Journal of Radiology, Journal Year: 2024, Volume and Issue: 25(8), P. 687 - 687
Published: Jan. 1, 2024
Language: English
Citations: 5
Diagnostic and Interventional Radiology, Journal Year: 2024, Volume and Issue: unknown
Published: Sept. 2, 2024
Stroke is a neurological emergency requiring rapid, accurate diagnosis to prevent severe consequences. Early diagnosis is crucial for reducing morbidity and mortality. Artificial intelligence (AI) support tools, such as Chat Generative Pre-trained Transformer (ChatGPT), offer rapid diagnostic advantages. This study assesses ChatGPT's accuracy in interpreting diffusion-weighted imaging (DWI) for acute stroke diagnosis.
Language: English
Citations: 5
Clinical Imaging, Journal Year: 2024, Volume and Issue: 115, P. 110276 - 110276
Published: Sept. 6, 2024
Language: English
Citations: 5
Korean Journal of Radiology, Journal Year: 2025, Volume and Issue: 26(1), P. 1 - 1
Published: Jan. 1, 2025
Language: English
Citations: 0
Intelligent Medicine, Journal Year: 2025, Volume and Issue: unknown
Published: March 1, 2025
Language: English
Citations: 0
Journal of Medical Imaging and Radiation Oncology, Journal Year: 2025, Volume and Issue: unknown
Published: April 8, 2025
ABSTRACT Background Publicly available artificial intelligence (AI) Vision Language Models (VLMs) are constantly improving. The advent of vision capabilities in these models could enhance radiology workflows. Evaluating their performance in radiological image interpretation is vital to potential integration into practice. Aim This study aims to evaluate the proficiency and consistency of publicly available VLMs, Anthropic's Claude and OpenAI's GPT, across multiple iterations on basic radiological tasks. Method Subsets from the ROCOv2 and MURAv1.1 datasets were used to evaluate 6 VLMs. A system prompt was input to each model three times. The outputs were compared with the dataset captions to assess each model's accuracy in recognising modality and anatomy, and in detecting fractures on radiographs. Output consistency was also analysed. Results Evaluation showed high modality recognition, with some models achieving 100%. Anatomical recognition ranged between 61% and 85% across all models tested. On the MURAv1.1 dataset, Claude-3.5-Sonnet had the highest anatomical recognition at 57% accuracy, while GPT-4o had the best fracture detection at 62% accuracy. The most consistent model achieved 83% and 92% consistency in anatomy recognition and fracture detection, respectively. Conclusion Given GPT's current reliability, use in clinical settings is not yet feasible. This study highlights the need for ongoing development and the establishment of standardised testing techniques to ensure these models achieve reliable performance.
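The evaluation protocol this abstract describes (one system prompt per image, repeated three times, outputs scored against dataset captions) can be sketched in a few lines. Below is a minimal illustration assuming the OpenAI Python SDK; the system-prompt wording, model choice, and helper names are my own assumptions, not the study's published code.

```python
# Hedged sketch of the repeated-prompt consistency check described above:
# send each radiograph to a VLM three times with the same system prompt,
# then compare the replies to each other and to the dataset caption.
import base64
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are reading a medical image. State the imaging modality, "
    "the anatomy shown, and whether a fracture is visible."
)

def query_vlm(image_path: str, runs: int = 3) -> list[str]:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    replies = []
    for _ in range(runs):  # three iterations per image, as in the study
        resp = client.chat.completions.create(
            model="gpt-4o",  # stand-in for any of the six VLMs evaluated
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": [
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ]},
            ],
        )
        replies.append(resp.choices[0].message.content)
    # Downstream, these replies would be scored against the ROCOv2 or
    # MURAv1.1 caption and checked for agreement across the three runs.
    return replies
```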
Language: English
Citations: 0
The Knee, Journal Year: 2025, Volume and Issue: 55, P. 79 - 84
Published: April 23, 2025
Language: English
Citations: 0
Korean Journal of Radiology, Journal Year: 2025, Volume and Issue: 26
Published: Jan. 1, 2025
Despite the potential of large language models for radiology training, their ability to handle image-based radiological questions remains poorly understood. This study aimed to evaluate the performance of GPT-4 Turbo and GPT-4o on resident examinations, analyze differences across question types, and compare the results with those of residents at different training levels. A total of 776 multiple-choice questions from the Korean Society of Radiology In-Training Examinations were used, forming two sets: one originally written in Korean and the other translated into English. We evaluated GPT-4 Turbo (gpt-4-turbo-2024-04-09) and GPT-4o (gpt-4o-2024-11-20) on these questions with the temperature set to zero, determining accuracy based on the majority vote of five independent trials. Results were analyzed by question type (text-only vs. image-based) and benchmarked against nationwide residents' performance. The impact of input language (Korean or English) on each model was also examined. GPT-4o outperformed GPT-4 Turbo on both image-based (48.2% vs. 41.8%, P = 0.002) and text-only questions (77.9% vs. 69.0%, P = 0.031). On image-based questions, GPT-4 Turbo and GPT-4o showed accuracy comparable to that of 1st-year residents (41.8% and 48.2%, respectively, vs. 43.3%; P = 0.608 and 0.079, respectively) but lower than that of 2nd- to 4th-year residents (vs. 56.0%-63.9%, all P ≤ 0.005). For text-only questions, both models performed better than residents of all years (69.0% and 77.9% vs. 44.7%-57.5%, all P ≤ 0.039). Performance on the English and Korean versions showed no significant difference for either model (all P ≥ 0.275). In summary, performance differed by question type: on image-based questions the models matched only 1st-year, not higher-year, residents; both demonstrated superior performance on text-only compared with image-based questions; and both showed consistent performances with Korean and English inputs.
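The scoring protocol this abstract reports (temperature zero, majority vote over five independent trials) is straightforward to reproduce in outline. Here is a minimal sketch assuming the OpenAI Python SDK; the prompt format and helper name are illustrative assumptions, and only the model snapshot string comes from the abstract.

```python
# Minimal sketch of a majority-vote scoring protocol: query the model five
# times at temperature 0 and take the most frequent answer as its response.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

def majority_vote_answer(question: str, choices: list[str], trials: int = 5) -> str:
    prompt = (
        question + "\n"
        + "\n".join(f"{label}. {text}" for label, text in zip("ABCDE", choices))
        + "\nAnswer with a single letter."
    )
    answers = []
    for _ in range(trials):
        resp = client.chat.completions.create(
            model="gpt-4o-2024-11-20",  # snapshot named in the abstract
            temperature=0,
            messages=[{"role": "user", "content": prompt}],
        )
        # Keep only the leading letter of each reply for voting.
        answers.append(resp.choices[0].message.content.strip()[:1].upper())
    # Accuracy is then scored on the majority answer across the five trials.
    return Counter(answers).most_common(1)[0][0]
```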
Language: English
Citations: 0
arXiv (Cornell University), Journal Year: 2024, Volume and Issue: 11, P. 1395934 - 1395934
Published: Jan. 1, 2024
ChatGPT, the most accessible generative artificial intelligence (AI) tool, offers considerable potential for veterinary medicine, yet a dedicated review of its specific applications is lacking. This review concisely synthesizes the latest research and practical applications of ChatGPT within the clinical, educational, and research domains of veterinary medicine. It intends to provide guidance and actionable examples of how this AI can be directly utilized by professionals without a programming background. For practitioners, ChatGPT can extract patient data, generate progress notes, and potentially assist in diagnosing complex cases. Veterinary educators can create custom GPTs for student support, while students can utilize it for exam preparation. It can aid academic writing tasks in research, but publishers have set requirements for authors to follow. Despite its transformative potential, careful use is essential to avoid pitfalls like hallucination. The review addresses ethical considerations, provides learning resources, and offers a tangible guide for responsible implementation. A table of key takeaways is provided to summarize the review. By highlighting both benefits and limitations, it equips veterinarians, educators, and researchers to harness this power effectively.
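As one concrete illustration of the clinical uses this review describes (progress-note generation from patient data), here is a hedged sketch using the OpenAI Python SDK; the prompt wording, model choice, and record fields are illustrative assumptions, not examples from the review itself.

```python
# Illustrative sketch only: drafting a SOAP-style veterinary progress note
# from structured visit data. Field names and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

visit = {
    "patient": "canine, 6 y, neutered male",
    "presenting_complaint": "intermittent left forelimb lameness, 2 weeks",
    "exam": "pain on elbow flexion, no swelling, afebrile",
    "plan": "NSAID trial, recheck in 10 days, radiographs if no improvement",
}

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a veterinary scribe. Draft a concise SOAP "
                    "progress note from the data provided. Do not invent findings."},
        {"role": "user", "content": str(visit)},
    ],
)
print(resp.choices[0].message.content)  # draft note; must be clinician-reviewed
```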
Language: English
Citations: 3
Radiology, Journal Year: 2024, Volume and Issue: 312(3)
Published: Sept. 1, 2024
Language: English
Citations: 2