Exploring the Predictors of AI Chatbot Usage Intensity Among Students: Within- and Between-Person Relationships Using the Technology Acceptance Model
Anne‐Kathrin Kleine, Insa Schaffernak, Eva Lermer

et al.

Computers in Human Behavior: Artificial Humans, Year: 2024, Number: unknown, Pages: 100113 - 100113

Published: Dec. 1, 2024

Language: English

Preparing for Artificial General Intelligence (AGI) in Health Professions Education: AMEE Guide No. 172
Ken Masters, Anne Herrmann–Werner, Teresa Festl‐Wietek

et al.

Medical Teacher, Year: 2024, Number: 46(10), Pages: 1258 - 1271

Published: Aug. 8, 2024

Generative Artificial Intelligence (GenAI) caught Health Professions Education (HPE) institutions off-guard, and they are currently adjusting to a changed educational environment. On the horizon, however, is Artificial General Intelligence (AGI).

Language: English

Cited by

10

AI-powered standardised patients: evaluating ChatGPT-4o’s impact on clinical case management in intern physicians
Selcen Öncü, Fulya Torun, Hilal Hatice Ülkü

et al.

BMC Medical Education, Year: 2025, Number: 25(1)

Published: Feb. 20, 2025

Artificial Intelligence is currently being applied in healthcare for diagnosis, decision-making, and education. ChatGPT-4o, with its advanced language and problem-solving capabilities, offers an innovative alternative as a virtual standardised patient in clinical training. Intern physicians are expected to develop case management skills such as problem-solving, reasoning, and crisis management. In this study, ChatGPT-4o served as a standardised patient for medical interns. The study aimed to evaluate intern physicians' competencies in case management and clinical reasoning, and to explore the potential of ChatGPT-4o as a viable tool for assessing these competencies. It used a simultaneous triangulation design, integrating quantitative and qualitative data. Conducted at Aydın Adnan Menderes University with 21 sixth-year medical students, ChatGPT-4o simulated realistic patient interactions requiring case management. Data were gathered through a self-assessment survey, semi-structured interviews, and observations of the students during the process. Analyses included Pearson correlation, Chi-square, and Kruskal-Wallis tests, with content analysis of the qualitative data conducted using MAXQDA software coding. According to the findings, observation and survey scores were positively correlated. There was a significant gap between participants' self-assessments and their actual performance, indicating discrepancies between self-perceived and real competence. Participants reported feeling inadequate and experienced time pressure. They were satisfied with the Artificial Intelligence-powered process and willing to continue similar practices, and all engaged in a uniform experience. Although participants were satisfied, the application was sometimes negatively affected by disconnection problems and processing challenges. ChatGPT-4o successfully simulated interactions, providing a controlled environment for practice without risking harm to patients; although some technological challenges limited its effectiveness, it was useful, cost-effective, and accessible. Skill acquisition is expected to be better supported by varying the scenarios used with this method. Trial registration: Not applicable.

Language: English

Cited by

1

ChatGPT’s Performance on Portuguese Medical Examination Questions: Comparative Analysis of ChatGPT-3.5 Turbo and ChatGPT-4o Mini
Filipe Prazeres

JMIR Medical Education, Year: 2025, Number: 11, Pages: e65108 - e65108

Published: March 5, 2025

Advancements in ChatGPT are transforming medical education by providing new tools for assessment and learning, potentially enhancing the evaluation of doctors and improving instructional effectiveness. This study evaluates the performance and consistency of ChatGPT-3.5 Turbo and ChatGPT-4o mini in solving European Portuguese medical examination questions (the 2023 National Examination for Access to Specialized Training; Prova Nacional de Acesso à Formação Especializada [PNA]) and compares their performance with that of human candidates. Each model was tested on the first part of the examination (74 questions) on July 18, 2024, and on the second part on July 19, 2024. Each model generated an answer using its natural language processing capabilities. To test consistency, each model was asked, "Are you sure?" after each answer. Differences between responses were analyzed using the McNemar test with continuity correction. A single-parameter t test compared the models' performance with that of the human candidates. Frequencies and percentages were used for categorical variables, and means and CIs for numerical variables. Statistical significance was set at P<.05. ChatGPT-4o mini achieved an accuracy rate of 65% (48/74) on the 2023 PNA examination, surpassing ChatGPT-3.5 Turbo. ChatGPT-4o mini outperformed the human candidates, while ChatGPT-3.5 Turbo had a more moderate performance. This study highlights the advancements and potential of these models in medical education, emphasizing the need for careful implementation, teacher oversight, and further research.

Language: English

Cited by

1

ChatGPT in education: unveiling frontiers and future directions through systematic literature review and bibliometric analysis
Buddhini Amarathunga

Asian Education and Development Studies, Year: 2024, Number: 13(5), Pages: 412 - 431

Published: July 7, 2024

Purpose – This dual-focused study qualitatively and quantitatively examines the literature on the recently initiated, revolutionizing concept of ChatGPT in education by performing a Systematic Literature Review (SLR) and bibliometric analysis. The study addresses eight research questions: (1) the main information on annual scientific publications on ChatGPT in education, (2) the pioneer authors and collaborative networks exploring ChatGPT in education, (3) authors' productivity through Lotka's Law of Authors' Scientific Productivity, (4) the most pertinent sources and how they are clustered through Bradford's Law of Scattering, (5) the most related and cited countries and the nature of their international collaborations, (6) the most relevant documents, (7) the most frequently occurring and trending keywords in empirical studies, and (8) themes and areas for future investigations in ChatGPT in education.

Design/methodology/approach – The study was designed as an SLR with bibliometric analysis, extracting articles from the Scopus database and utilizing both Biblioshiny and VOSviewer software for advanced mapping and visualizations via quantitative and qualitative analysis approaches.

Findings – The results indicated a progressively evolving worldwide literature, generating 45 articles from 2023 to 2024 (May). The USA, China, and Indonesia were the most productive countries. The published studies clustered around keywords such as AI systems, students, educational computing, human experiments, teaching, chatbots, generative AI, academic integrity, technology, and technology acceptance, which indicate directions for the field.

Originality/value – The analysis's outcomes will enhance the research area with theoretical and practical implications to benefit teachers, policymakers, regulators of higher education sectors, government, and the general public in the effective utilization of ChatGPT in education.

Language: English

Cited by

7

Evaluating the performance of ChatGPT-3.5 and ChatGPT-4 on the Taiwan plastic surgery board examination
Ching‐Hua Hsieh, Hsiao-Yun Hsieh, Hui‐Ping Lin

et al.

Heliyon, Year: 2024, Number: 10(14), Pages: e34851 - e34851

Published: July 1, 2024

Language: English

Cited by

5

Research ethics and issues regarding the use of ChatGPT-like artificial intelligence platforms by authors and reviewers: a narrative review
Sang-Jun Kim

Science Editing, Year: 2024, Number: 11(2), Pages: 96 - 106

Published: Aug. 20, 2024

While generative artificial intelligence (AI) technology has become increasingly competitive since OpenAI introduced ChatGPT, its widespread use poses significant ethical challenges in research. Excessive reliance on tools like ChatGPT may intensify ethical concerns in scholarly articles. Therefore, this article aims to provide a comprehensive narrative review of the ethical issues associated with using AI in academic writing and to inform researchers of current trends. Our methodology involved a detailed examination of the literature related to AI and research ethics. We conducted searches in major databases to identify additional relevant articles cited in the literature, from which we collected and analyzed papers. The issues identified were categorized into problems faced by authors using academic and nonacademic AI platforms and the detection and acceptance of AI-generated content by reviewers and editors. We explored eight specific issues and highlighted, through a thorough review, five key topics in research ethics. Given that AI platforms often do not disclose their training data sources, there is a substantial risk of unattributed plagiarism. Authors must verify the accuracy and authenticity of AI-generated content before incorporating it into an article, ensuring adherence to the principles of research integrity and ethics, including the avoidance of fabrication, falsification, and plagiarism.

Language: English

Cited by

5

The Promise of ChatGPT in Medical Education: a systematic review (Preprint)
Peiyuan Tang, Rongchi Xiao, Yangbin Cao

et al.

Published: Jan. 2, 2025

Purpose: This systematic review examines the potential of ChatGPT as a tool in medical education, focusing on its role in enhancing learning experiences, student performance, and critical thinking skills. ChatGPT's integration aims to address the shortage of faculty resources and create personalized, interactive learning experiences for students. Methods: Following PRISMA and AMSTAR guidelines, we conducted a search across four databases (Embase, PubMed, Web of Science, Cochrane Library) up to October 2024. Data from seven studies spanning various disciplines and educational levels were included, analyzed descriptively, and evaluated for quality. Results: Seven studies demonstrated that ChatGPT-assisted education improves academic performance, clinical skills, and self-directed learning (SDL) capabilities. Notably, students using ChatGPT showed higher scores on short-term assessments and final exams. ChatGPT-4.0, compared with version 3.5, provided enhanced case generation and communication skills training. Additionally, ChatGPT-supported learning boosted students' SDL, critical thinking, and engagement levels, while helping educators manage instructional workload. Conclusion: This study highlights ChatGPT's strong potential to significantly enhance self-directed learning and critical thinking. It underscores ChatGPT's value in personalized learning and in supporting the development of essential competencies. ChatGPT-4.0 outperforms version 3.5 with improved case generation and communication abilities.

Language: English

Cited by

0

UsmleGPT: An AI application for developing MCQs via multi-agent system

Zhehan Jiang, S. H. Feng

Software Impacts, Year: 2025, Number: 23, Pages: 100742 - 100742

Published: March 1, 2025

Language: English

Cited by

0

Comparative Evaluation of Large Language Models for Medical Education: Performance Analysis in Urinary System Histology.
Anikó Szabó, Ghasem Dolatkhah Laein

Research Square (Research Square), Year: 2025, Number: unknown

Published: March 13, 2025

Abstract: Large language models (LLMs) show potential for medical education, but their domain-specific capabilities need systematic evaluation. This study presents a comparative assessment of thirteen LLMs in urinary system histology education. Using a multi-dimensional framework, we evaluated the models across two tasks: answering 65 validated multiple-choice questions (MCQs) and generating clinical scenarios with assessment items. For MCQ performance, we assessed accuracy along with explanation quality through relevance and comprehensiveness metrics. For scenario generation, we evaluated the Quality, Complexity, Relevance, Correctness, and Variety dimensions. Performance varied substantially across tasks, with ChatGPT-o1 achieving the highest MCQ accuracy (96.31 ± 17.85%) and Claude-3.5 demonstrating superior scenario generation (91.4% of the maximum possible score). All models significantly outperformed random guessing, with large effect sizes. Statistical analyses revealed significant differences in consistency across multiple attempts and in dimensional performance, with most models showing higher Correctness than Quality scores in scenario generation. Term frequency analysis revealed content imbalances across all models, with overemphasis of certain anatomical structures and complete omission of others. Our findings demonstrate that while LLMs show considerable promise, reliable implementation requires matching specific models to appropriate educational tasks, implementing verification mechanisms, and recognizing current limitations in producing pedagogically balanced content.

Language: English

Cited by

0

Integrating artificial intelligence into pre-clinical medical education: challenges, opportunities, and recommendations

Birgit Pohn, Lars Mehnen, Sebastian Fitzek

et al.

Frontiers in Education, Year: 2025, Number: 10

Published: March 26, 2025

Cited by

0