Systematic review of ChatGPT accuracy and performance in Iran’s medical licensing exams: A brief report
Alireza Keshtkar, Farnaz Atighi, Hamid Reihani

et al.

Journal of Education and Health Promotion, Journal Year: 2024, Volume and Issue: 13(1)

Published: Nov. 1, 2024

ChatGPT has demonstrated significant potential in various aspects of medicine, including its performance on licensing examinations. In this study, we systematically investigated ChatGPT’s performance on Iranian medical exams and assessed the quality of the included studies using a previously published assessment checklist. The study found that ChatGPT achieved an accuracy range of 32–72% on basic science exams, 34–68.5% on pre-internship exams, and 32–84% on residency exams. Notably, accuracy was generally higher when the input was provided in English compared to Persian. One study reported a 40% accuracy rate on the endodontic board exam. To establish ChatGPT as a supplementary tool in education and clinical practice, we suggest that dedicated guidelines and checklists are needed to ensure high-quality and consistent research in this emerging field.

Language: English

Quantum leap in medical mentorship: exploring ChatGPT’s transition from textbooks to terabytes
Santosh Chokkakula, Siomui Chong, Bing Xiang Yang

et al.

Frontiers in Medicine, Journal Year: 2025, Volume and Issue: 12

Published: April 28, 2025

ChatGPT, an advanced AI language model, presents a transformative opportunity in several fields, including medical education. This article examines the integration of ChatGPT into healthcare learning environments, exploring its potential to revolutionize knowledge acquisition, personalize education, support curriculum development, and enhance clinical reasoning. The AI’s ability to swiftly access and synthesize information across various specialties offers significant value to students and professionals alike. It provides rapid answers to queries on theories, treatment guidelines, and diagnostic methods, potentially accelerating the learning curve. The paper emphasizes the necessity of verifying ChatGPT’s outputs against authoritative sources. A key advantage highlighted is its capacity to tailor learning experiences by assessing individual needs, accommodating diverse learning styles, and offering personalized feedback. The article also considers ChatGPT’s role in shaping curricula and assessment techniques, suggesting that educators may need to adapt their methods to incorporate AI-driven tools. Additionally, it explores how ChatGPT could bolster problem-solving through AI-powered simulations, fostering critical thinking and clinical acumen among students. While recognizing its potential, the paper stresses the importance of thoughtful implementation, continuous validation, and the establishment of protocols to ensure responsible and effective application in medical education settings.

Language: English

Citations: 0
