Evaluating the Cognitive Levels of Generative AI via Bloom’s Taxonomy: A Cross-sectional Study (Preprint)
Kuan‐Ju Huang, Cheng-Heng Liu,

Chien-Chun Wu

et al.

Published: May 15, 2024

BACKGROUND: Generative AI has garnered attention in the medical field, yet its potential is constrained by inherent limitations. Because it responds to inputs by predicting the next word from a memory-based archive, we aim to explore some of these constraints from an educational and psychological perspective, utilizing Bloom’s taxonomy. OBJECTIVE: To assess AI’s cognitive functions in the medical sector by examining its performance on licensing exams and applying Bloom’s taxonomy. METHODS: Questions from the Taiwan Medical Licensing Examination (TMLE) (August 2022) and the third step of the United States Medical Licensing Examination (USMLE) were classified according to Bloom’s taxonomy levels. The ChatGPT versions were tasked with individual prompts, with questions entered separately into ChatGPT-3.5 and ChatGPT-4 using different accounts. After each response, chat logs were erased and reset to ensure the independence of each answer. Responses from ChatGPT-3.5 and ChatGPT-4, collected between January and February 2024, were analyzed; both versions were available online during the study period. RESULTS: Although the overall performance of ChatGPT-4 surpassed that of ChatGPT-3.5, analysis of the responses of both models across the various taxonomy levels revealed no significant correlation with their performance. This lack of significance persisted even when considering the strength of ChatGPT’s extensive databases for questions classified under "remember," compared with other questions labeled "non-remember." CONCLUSIONS: ChatGPT may utilize the "remember" function to answer all types of question categories defined by Bloom’s taxonomy. Further research is required, focusing on other versions, specialties, and levels of difficulty, assessed by individuals with different backgrounds.

Language: English

Case-based MCQ generator: A custom ChatGPT based on published prompts in the literature for automatic item generation
Yavuz Selim Kıyak, Andrzej A. Kononowicz

Medical Teacher, Journal year: 2024, Issue: 46(8), pp. 1018 - 1020

Published: Feb. 10, 2024

What is the Educational Challenge? A fundamental challenge in medical education is creating high-quality, clinically relevant multiple-choice questions (MCQs). ChatGPT-based automatic item generation (AIG) methods need well-designed prompts. However, the use of these prompts is hindered by the time-consuming process of copying and pasting, a lack of know-how among teachers, and the generalist nature of standard ChatGPT, which often lacks medical context.

Language: English

Cited

25

Preparing for Artificial General Intelligence (AGI) in Health Professions Education: AMEE Guide No. 172
Ken Masters, Anne Herrmann–Werner, Teresa Festl‐Wietek

et al.

Medical Teacher, Journal year: 2024, Issue: 46(10), pp. 1258 - 1271

Published: Aug. 8, 2024

Generative Artificial Intelligence (GenAI) caught Health Professions Education (HPE) institutions off-guard, and they are currently adjusting to a changed educational environment. On the horizon, however, is

Language: English

Cited

10

Enhancing clinical decision‐making: Optimizing ChatGPT's performance in hypertension care
Jing Miao, Charat Thongprayoon, Tibor Fülöp

et al.

Journal of Clinical Hypertension, Journal year: 2024, Issue: 26(5), pp. 588 - 593

Published: April 22, 2024

Language: English

Cited

9

Artificial Intelligence in Health Professions Education assessment: AMEE Guide No. 178
Ken Masters, Heather MacNeill, Jennifer Benjamin

et al.

Medical Teacher, Journal year: 2025, Issue: unknown, pp. 1 - 15

Published: Jan. 9, 2025

Health Professions Education (HPE) assessment is being increasingly impacted by Artificial Intelligence (AI), and institutions, educators, and learners are grappling with AI's ever-evolving complexities, dangers, and potential. This AMEE Guide aims to assist all HPE stakeholders by helping them navigate the uncertainty before them. Although its impetus is AI, the Guide grounds its path in pedagogical theory, considers the range of human responses, and then deals with assessment types, challenges, AI roles as tutor and learner, and required competencies. It discusses difficult ethical issues, ending with considerations for faculty development and the technicalities of acknowledging AI in assessment. Through this Guide, we aim to allay fears in the face of change and demonstrate possibilities that will allow educators to harness AI's full potential.

Language: English

Cited

1

Expert assessment of ChatGPT’s ability to generate illness scripts: an evaluative study
Yasutaka Yanagita, Daiki Yokokawa, Fumitoshi Fukuzawa

et al.

BMC Medical Education, Journal year: 2024, Issue: 24(1)

Published: May 15, 2024

Abstract Background: An illness script is a specific format geared to represent patient-oriented clinical knowledge organized around enabling conditions, faults (i.e., the pathophysiological process), and consequences. Generative artificial intelligence (AI) stands out as an educational aid in continuing medical education. The effortless creation of typical illness scripts by generative AI could help with comprehension of the key features of diseases and increase diagnostic accuracy. No systematic summary of examples of illness scripts has been reported, since illness scripts are unique to each physician. Objective: This study investigated whether generative AI can generate illness scripts. Methods: We utilized ChatGPT-4, a generative AI, to create illness scripts for 184 diseases based on the conditions integral to the National Model Core Curriculum in Japan for undergraduate medical education (2022 revised edition) and for primary care specialist training in Japan. Three physicians applied a three-tier grading scale: "A" denotes that the content of the disease's illness script proves sufficient for students, "B" that it is partially lacking but acceptable, and "C" that it is deficient in multiple respects. Results: By leveraging ChatGPT-4, we successfully generated each component of the illness scripts without any omission. The illness scripts received "A," "B," and "C" ratings of 56.0% (103/184), 28.3% (52/184), and 15.8% (29/184), respectively. Conclusion: Useful illness scripts were seamlessly and instantaneously created using ChatGPT-4 by employing prompts appropriate for medical students. This technology-driven approach is a valuable tool for introducing students to typical cases of diseases.

Language: English

Cited

7

Custom GPTs Enhancing Performance and Evidence Compared with GPT-3.5, GPT-4, and GPT-4o? A Study on the Emergency Medicine Specialist Examination

C F Liu,

Chien‐Ta Bruce Ho, Tzu-Chi Wu

et al.

Healthcare, Journal year: 2024, Issue: 12(17), pp. 1726 - 1726

Published: Aug. 30, 2024

Given the widespread application of ChatGPT, we aim to evaluate its proficiency in the emergency medicine specialty written examination. Additionally, we compare the performance of GPT-3.5, GPT-4, custom GPTs, and GPT-4o. The research seeks to ascertain whether custom GPTs possess the essential capabilities and access to the knowledge bases necessary for providing accurate information, and to explore their effectiveness and potential for personalized learning in supporting the education of medical residents. We evaluated ChatGPT-3.5, GPT-4, custom GPTs, and GPT-4o on the Emergency Medicine Specialist Examination in Taiwan. Two hundred single-choice exam questions were provided to these AI models, and their responses were recorded. Correct rates were compared among the four models, and the McNemar test was applied to paired model data to determine if there were significant differences in performance. Out of the 200 questions, GPT-3.5, GPT-4, the custom GPTs, and GPT-4o correctly answered 77, 105, 119, and 138 questions, respectively. GPT-4o demonstrated the highest performance, significantly better than GPT-4, which, in turn, outperformed GPT-3.5, while the custom GPTs exhibited performance superior to GPT-4 but inferior to GPT-4o, with all p < 0.05. In this exam, our findings highlight the value of large language models (LLMs) as well as their strengths and limitations, especially regarding question types and image-inclusion capabilities. Not only do LLMs facilitate exam preparation, they also elevate the level of evidence and source accuracy, demonstrating their potential to transform educational frameworks and clinical practices in medicine.
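The McNemar test used in this study compares two models on the same questions, using only the discordant pairs (questions one model got right and the other wrong). A minimal sketch of the continuity-corrected statistic, with purely illustrative counts (not the study's raw data):

```python
def mcnemar_statistic(b: int, c: int) -> float:
    """Continuity-corrected McNemar chi-square.

    b: questions model A answered correctly and model B incorrectly.
    c: the reverse. Concordant pairs do not enter the statistic.
    """
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical discordant counts for two models on a 200-question exam.
b, c = 30, 10
chi2 = mcnemar_statistic(b, c)

# Compare against the chi-square critical value with 1 df for p < 0.05.
significant = chi2 > 3.841
```

With these illustrative counts the statistic is (|30 - 10| - 1)^2 / 40 = 9.025, exceeding the 3.841 threshold, so the paired difference would be significant at p < 0.05.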

Language: English

Cited

6

Using large language models (ChatGPT, Copilot, PaLM, Bard, and Gemini) in Gross Anatomy course: Comparative analysis
Volodymyr Mavrych,

Paul Ganguly,

Olena Bolgova

et al.

Clinical Anatomy, Journal year: 2024, Issue: unknown

Published: Nov. 21, 2024

The increasing application of generative artificial intelligence large language models (LLMs) in various fields, including medical education, raises questions about their accuracy. The primary aim of our study was to undertake a detailed comparative analysis of the proficiencies and accuracies of six different LLMs (ChatGPT-4, ChatGPT-3.5-turbo, ChatGPT-3.5, Copilot, PaLM, Bard, and Gemini) in responding to multiple-choice questions (MCQs) and in generating clinical scenarios and MCQs for upper limb topics in a Gross Anatomy course for medical students. The selected chatbots were tested by answering 50 USMLE-style MCQs. The questions were randomly selected from the exam database for medical students and reviewed by three independent experts. The results of five successive attempts to answer each set of questions were evaluated in terms of accuracy, relevance, and comprehensiveness. The best result was provided by ChatGPT-4, which answered 60.5% ± 1.9% of the questions accurately, then Copilot (42.0% ± 0.0%) and ChatGPT-3.5 (41.0% ± 5.3%), followed by ChatGPT-3.5-turbo (38.5% ± 5.7%). Google PaLM 2 (34.5% ± 4.4%) and Bard (33.5% ± 3.0%) gave the poorest results. The overall performance of GPT-4 was statistically superior (p < 0.05) to those of Copilot, GPT-3.5, GPT-3.5-turbo, PaLM 2, and Bard, by 18.6%, 19.5%, 22%, 26%, and 27%, respectively. Each chatbot was then asked to generate a clinical scenario for three randomly selected topics - the anatomical snuffbox, supracondylar fracture of the humerus, and the cubital fossa - and three related anatomical MCQs with five options each, and to indicate the correct answers. Two experts analyzed and graded the 216 records received (on a 0-5 scale). The best result was recorded for Gemini; PaLM 2 had the lowest grade. Technological progress notwithstanding, LLMs have yet to mature sufficiently to take over the role of a teacher or facilitator completely within a Gross Anatomy course; however, they can be valuable tools for medical educators.
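The per-model accuracies above are reported as mean ± SD over five repeated attempts at the same 50-question set. Under that assumption, the summary statistics can be computed as in this minimal sketch (the per-attempt counts below are hypothetical, not the study's raw data):

```python
from statistics import mean, stdev


def accuracy_stats(correct_counts: list[int], n_questions: int) -> tuple[float, float]:
    """Mean and sample SD of percent accuracy across repeated attempts."""
    pct = [100 * c / n_questions for c in correct_counts]
    return mean(pct), stdev(pct)


# Hypothetical correct-answer counts for one chatbot over five runs of 50 MCQs.
m, s = accuracy_stats([30, 31, 30, 29, 30], 50)
# Yields a mean of 60.0% with a sample SD of about 1.4 percentage points,
# matching the "mean ± SD" format used in the abstract.
```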

Language: English

Cited

6

Creating custom GPTs for faculty development: An example using the Johari Window and Crucial Conversation frameworks for providing feedback to struggling students
Neil Mehta, Craig Nielsen,

Amy Zack

et al.

Medical Teacher, Journal year: 2025, Issue: unknown, pp. 1 - 3

Published: Jan. 9, 2025

Feedback plays a crucial role in the growth and development of trainees, particularly when addressing areas needing improvement. However, faculty members often struggle to deliver constructive feedback, especially when discussing underperformance. A key obstacle is the lack of comfort many experience in providing feedback that fosters growth. Traditional faculty development programs designed to address these challenges can be expensive and too time-intensive for busy clinicians. Generative AI, specifically custom GPT models simulating virtual students and coaches, offers a promising solution for feedback training. These AI-driven tools simulate realistic scenarios using widely accepted educational frameworks and coach faculty on best practices for delivering feedback. Through interactive, low-cost, accessible simulations, faculty can practice in a safe environment and receive immediate, tailored coaching. This approach enhances confidence and competence while reducing the logistical and financial constraints of traditional programs. By offering scalable, on-demand training, GPT-based simulations can be seamlessly integrated into clinical environments, fostering a supportive culture prioritizing trainee development. This paper describes a stepwise process for the design and implementation of GPT-powered feedback training based on an educational framework. It has the potential to transform medical education.

Language: English

Cited

0

Qualitative evaluation of generative artificial intelligence models for answering clinical questions in pediatric rehabilitation

Daniela García Palomer,

MAURICIO ARRETX SPOERER

Rehabilitación Integral, Journal year: 2025, Issue: 18(1), pp. 19 - 32

Published: Jan. 16, 2025

Introduction: The use of artificial intelligence in the healthcare field has shown multiple potentials. Among them is the use of generative models, such as ChatGPT, to support the creation of clinical documents like reviews, guidelines, or protocols. The objective of this study is to evaluate and compare the responses to clinical questions defined by experts on two pediatric rehabilitation topics, as provided by the most widely used generative AI models on the market, in order to analyze their role in supporting clinical documents and guidelines. Material and Methods: Qualitative descriptive study. Through prompts with various types of questions, an expert evaluates and rates, on a scale designed for this study, 5 models: ChatGPT, Gemini, Claude, Perplexity, and a customized GPT. Results: All models are capable of delivering well-structured and coherent responses, but with some shortcomings in technical content and in updates according to evidence-based medicine. ChatGPT received the highest ratings on the rating scale. Discussion: Generative models can play a role in supporting the creation of clinical documents, guidelines, or protocols, providing a quick and effective tool. However, each step should be supervised by a human expert to identify possible errors and hallucinations, and to address data protection, ethical, and security issues.

Language: English

Cited

0

Accuracy, satisfaction, and impact of custom GPT in acquiring clinical knowledge: Potential for AI-assisted medical education
Jiaxi Pu,

Jie Hong,

Yu Qiao

et al.

Medical Teacher, Journal year: 2025, Issue: unknown, pp. 1 - 7

Published: Feb. 2, 2025

Recent advancements in artificial intelligence (AI) have enabled the customization of large language models to address specific domains such as medical education. This study investigates the practical performance of a custom GPT model in enhancing clinical knowledge acquisition for medical students and physicians. A custom GPT was developed by incorporating the latest readily available teaching resources. Its accuracy in providing clinical knowledge was evaluated using a set of clinical questions, and its responses were compared against established guidelines. Satisfaction was assessed through surveys involving students and physicians at different stages of training and from various types of hospitals. The impact was further assessed by comparing its role in facilitating clinical knowledge acquisition with traditional learning methods. The custom GPT demonstrated higher accuracy (83.6%) than general AI models (65.5%, 69.1%) and accuracy comparable to a professionally developed model (Glass Health, 83.6%). Residents reported the highest satisfaction, above that of clerks and physicians, citing improved independence, motivation, and confidence (p < 0.05). Physicians, especially those at teaching hospitals, showed greater eagerness to develop custom GPTs for residents. Further analysis revealed that learners achieved better test scores with the custom GPT than with traditional resources (p < 0.05), though fewer perfect scores were obtained. The custom GPT demonstrates significant promise as an innovative tool for advancing medical education, particularly for residents. Its capability to deliver accurate, tailored information complements traditional methods, aiding educators in promoting personalized and consistent training. However, it is essential that both learners and educators remain critical in evaluating AI-generated information. With continued development and thoughtful integration, tools like custom GPTs have the potential to significantly enhance the quality and accessibility of medical education.

Language: English

Cited

0