Medical Records, Journal Year: 2024, Volume and Issue: 7(1), P. 162 - 166
Published: Dec. 19, 2024
Aim: This study aims to evaluate the performance of the ChatGPT-4.0 model in answering questions from the Turkish Dentistry Specialization Exam (DUS), comparing it with DUS examinees and exploring the model's clinical reasoning capabilities and its potential educational value in dental training. The objective is to identify the strengths and limitations of ChatGPT when tasked with responding to questions typically presented in this critical examination for dental professionals.

Material and Method: The study analyzed DUS questions from the years 2012 to 2017, focusing on the basic medical sciences sections. ChatGPT's responses to these questions were compared with the average scores of examinees who had previously taken the exam. A statistical analysis was performed to assess the significance of the differences between ChatGPT and the human examinees.

Results: ChatGPT significantly outperformed the human examinees in both sections across all years analyzed. The analysis revealed that these differences were statistically significant, demonstrating ChatGPT's superior accuracy in all years.

Conclusion: ChatGPT's performance demonstrates its potential as a supplementary tool in dental education and exam preparation. However, future research should focus on integrating AI into practical training, particularly by assessing its real-world applicability. The challenge of replicating hands-on decision-making in unpredictable clinical environments must also be considered.
Language: English