Adoption of Generative Large Language Models in Pathology: A National Survey of Chinese Pathologists (Preprint)
Peng Xue, Yuting Wang, Victor Yu Cui et al.

Published: Oct. 19, 2024

BACKGROUND Pathologists are grappling with high workloads and uneven resource distribution, which can impede professional development and the delivery of quality patient care. The advent of generative large language models (LLMs) has the potential to revolutionize the field of pathology, where efficiency and accessibility are paramount. OBJECTIVE This study aimed to investigate the perceptions and willingness of Chinese pathologists to adopt LLMs. METHODS We conducted a questionnaire survey at the National Pathology Academic Annual Conference in April 2024, involving 339 certified pathologists. Participant responses were measured on a 5-point Likert scale for the performance of LLMs in clinical, research, and educational settings, with statistical analysis using mean and standard deviation (SD). Multivariable logistic regression was employed to explore factors associated with the adoption of LLMs, reporting odds ratios (ORs) with 95% confidence intervals (CIs). RESULTS A total of 339 valid questionnaires were returned. The results revealed that pathologists generally supported the use of LLMs in clinical (mean 3.87, SD 0.96), research (mean 3.88, SD 1.09), and educational (mean 4.04, SD 0.82) contexts. Positive attitudes towards their use were prevalent. Notably, pathologists practicing in less developed urban areas (OR=1.99, 95% CI=1.07-3.69, p=0.030), those with higher caseloads (>5000 cases/year; OR=2.12, 95% CI=1.01-4.44, p=0.047), those engaged in research (OR=2.94, 95% CI=1.61-5.34, p<0.001) and teaching (OR=2.37, 95% CI=1.42-3.96, p=0.001) activities, as well as those with prior LLM experience (OR=2.45, 95% CI=1.38-4.37, p=0.002), showed a greater inclination toward future adoption. CONCLUSIONS Chinese pathologists are receptive to LLMs, showing positive attitudes toward their application. The study advocates fostering their adoption to improve diagnostic accuracy, reduce the burden on pathologists, and raise the overall service level in the field of pathology.
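The abstract above reports odds ratios with 95% confidence intervals from multivariable logistic regression. As a hedged illustration of how such figures are derived, the sketch below exponentiates a logit coefficient and its Wald-type interval; the coefficient and standard error are hypothetical values chosen only to roughly reproduce the reported OR of 1.99, not numbers from the study.

```python
import math

def or_with_ci(beta: float, se: float, z: float = 1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient.

    OR = exp(beta); CI bounds = exp(beta +/- z * SE).
    """
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical beta/SE for the "less developed urban area" predictor:
or_, lo, hi = or_with_ci(beta=0.688, se=0.314)
print(f"OR={or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # roughly OR=1.99
```

This is the standard Wald construction; the study may have used a different variance estimator, so the bounds here are illustrative only.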

Language: English

Artificial Intelligence in Health Professions Education Assessment: AMEE Guide No. 178
Ken Masters, Heather MacNeill, Jennifer Benjamin et al.

Medical Teacher, Journal Year: 2025, Volume and Issue: unknown, P. 1 - 15

Published: Jan. 9, 2025

Health Professions Education (HPE) assessment is being increasingly impacted by Artificial Intelligence (AI), and institutions, educators, and learners are grappling with AI's ever-evolving complexities, dangers, and potential. This AMEE Guide aims to assist all HPE stakeholders by helping them navigate the uncertainty before them. Although the impetus is AI, the Guide grounds its path in pedagogical theory, considers the range of human responses, and then deals with AI types, challenges, AI roles as tutor and learner, and the required competencies. It discusses difficult ethical issues, ending with considerations for faculty development and the technicalities of acknowledging AI in assessment. Through this Guide, we aim to allay fears in the face of change and demonstrate possibilities that will allow educators to harness AI's full potential.

Language: English

Citations: 7

The Impact of Generative AI on Essay Revisions and Student Engagement
Noble Lo, Alan Wong, S. T. Chan et al.

Computers and Education Open, Journal Year: 2025, Volume and Issue: unknown, P. 100249 - 100249

Published: March 1, 2025

Language: English

Citations: 4

Large language models improve clinical decision making of medical students through patient simulation and structured feedback: a randomized controlled trial

Emilia Brügge, Sarah Ricchizzi, Malin Arenbeck et al.

BMC Medical Education, Journal Year: 2024, Volume and Issue: 24(1)

Published: Nov. 28, 2024

Language: English

Citations: 3

The feasibility of using generative artificial intelligence for history taking in virtual patients
Yongjin Yi, Kyong‐Jee Kim

BMC Research Notes, Journal Year: 2025, Volume and Issue: 18(1)

Published: Feb. 24, 2025

This study aimed to design and develop a virtual patient program using generative Artificial Intelligence (AI) technology, providing medical students with opportunities to practice history-taking with a chatbot. We evaluated the feasibility of this approach by analyzing the quality of the responses generated. Five expert reviewers participated in a pilot test, interacting with the chatbot to take a history from a virtual patient presenting with a urinary problem, built on the Korean AI platform Naver HyperCLOVA X®. They completed a five-item questionnaire rated on a five-point Likert scale. The interactions produced 96 pairs of questions and answers, totaling 1,325 words in 177 sentences. Discourse analysis of the scripts revealed that 2.6% (34) were deemed implausible, categorized into inarticulate responses, hallucinations, and missing important information. Participants rated the answers as relevant (M = 4.50 ± 0.32), valid (M = 4.20 ± 0.40), accurate (M = 4.10 ± 0.20), and succinct (M = 3.80 ± 0.51), but were neutral about their fluency (M = 3.20 ± 0.60). Using generative AI for virtual patients is feasible, although improvements are needed for more articulate and natural responses.

Language: English

Citations: 0

AI agent as a simulated patient for history-taking training in clinical clerkship: an example in stomatology

Yongxiang Yuan, Jieyu He, Fang Wang et al.

Global Medical Education, Journal Year: 2025, Volume and Issue: unknown

Published: March 5, 2025

Abstract Objective This study developed an AI-powered chatbot simulating a patient with acute pulpitis to enhance history-taking training in stomatology, aiming to provide a cost-effective tool that improves diagnostic and communication skills while fostering clinical competence and empathy. Methods The study involved 126 undergraduate medical students who interacted with an AI agent simulating a patient suffering from acute pulpitis. The chatbot was created and optimized through a five-step process, including preliminary creation, usability testing with the Chatbot Usability Questionnaire (CUQ), analysis and optimization, retesting, and comparison of pre- and post-optimization results. The platform used was ChatGLM, and statistical analysis was performed using R software. Results The pre-optimization group's CUQ mean score was 64.2, indicating moderate satisfaction. After optimization, the mean score improved to 79.3, showing significantly higher satisfaction. Improvements were noted in all aspects, particularly the chatbot's personality, user experience, error handling, and onboarding. Conclusion The chatbot effectively addresses challenges in history-taking training by improving realism, engagement, and accessibility across diverse clinical scenarios. It demonstrates the potential of AI chatbots as valuable tools for enhancing medical education.

Language: English

Citations: 0

Technology-enhanced learning in medical education in the age of artificial intelligence
Kyong‐Jee Kim

Forum for Education Studies, Journal Year: 2025, Volume and Issue: 3(2), P. 2730 - 2730

Published: April 1, 2025

This paper explores the transformative role of artificial intelligence (AI) in medical education, emphasizing its use as a pedagogical tool for technology-enhanced learning. It highlights AI's potential to enhance the learning process through various inquiry-based strategies and to support Competency-Based Medical Education (CBME) by generating high-quality assessment items with automated and personalized feedback, analyzing data from both human supervisors and AI, and helping predict the future professional behavior of current trainees. It also addresses the inherent challenges and limitations of using AI in student assessment, calling for guidelines to ensure its valid and ethical use. Furthermore, the integration of AI into virtual patient (VP) technology to offer experiences of patient encounters significantly enhances interactivity and realism, overcoming limitations of conventional VPs. Although incorporating chatbots into VPs is promising, further research is warranted to establish their generalizability across clinical scenarios. The paper discusses the preferences of Generation Z learners and suggests a conceptual framework for integrating AI into teaching and supporting learning, aligning with the needs of today's students and utilizing the adaptive capabilities of AI. Overall, this paper highlights areas of medical education where AI can play pivotal roles in overcoming educational challenges and offers perspectives on future developments in education. It calls for research to advance theory and practice in using AI tools to innovate educational practices tailored to learners and to understand the long-term impacts of AI-driven learning environments.

Language: English

Citations: 0

Artificial Intelligence in Medical Education: A Practical Guide for Educators

Nivritti G. Patil, Nga Lok Kou, Daniel T. Baptista‐Hon et al.

MedComm – Future Medicine, Journal Year: 2025, Volume and Issue: 4(2)

Published: April 2, 2025

ABSTRACT Artificial intelligence (AI)‐driven learning is transforming education, requiring educators to quickly develop the skills to integrate AI tools effectively so that they complement rather than replace traditional teaching practices. The fast pace of generative AI development poses challenges, particularly for less tech‐savvy teachers or those who delay learning about these tools, leaving them at risk of falling behind. This is further compounded by students' quick adaptation to widely available models such as ChatGPT‐3.5 and Deepseek R1, which they increasingly use for learning, assignments, and assessments. Despite existing discussions on AI in education, there is a lack of practical guidance on how medical educators can responsibly implement AI in teaching. This perspective provides a guide to incorporating AI into teaching strategies, generating student assessments, and adapting assignments to be suitable for the AI era. We address challenges such as data bias, accuracy, and ethics, ensuring that AI enhances rather than undermines training when aligned with sound pedagogical principles. We review a practical, structured approach for educators, offering clear recommendations to help bridge the gap between AI advancements and effective teaching methodologies in medical education.

Language: English

Citations: 0

Digital and Intelligence Education in Medicine: A Bibliometric and Visualization Analysis Using CiteSpace and VOSviewer
Bing Xiang Yang, Fanqin Zhou, Nan Bai et al.

Frontiers of Digital Education, Journal Year: 2025, Volume and Issue: 2(1)

Published: March 1, 2025

Language: English

Citations: 0

Exploring the Use of a Large Language Model in Simulation Debriefing: An Observational Simulation-Based Pilot Study

Eury Hong, Sundes Kazmir, Benjamin Dylik et al.

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare, Journal Year: 2025, Volume and Issue: unknown

Published: May 13, 2025

Facilitating debriefings in simulation is a complex task with high cognitive load. The increasing availability of generative artificial intelligence (AI) offers an opportunity to support facilitators. We explored facilitation and debriefing strategies using a large language model (LLM) to decrease facilitators' cognitive load and allow for a more comprehensive debrief. This prospective, observational, simulation-based pilot study was conducted at Yale University School of Medicine. For each simulation, a debriefing script was generated by passing a real-time transcription of the case as input to the GPT-4o LLM. Thereafter, facilitators and learners completed surveys and workload assessments. The primary outcome was workload measured on the NASA-TLX scale. The secondary outcome was perception of AI technologies via survey-based questions. The study involved four facilitators and 25 learners, with all data being self-reported. All showed strong enthusiasm for AI integration, with mean Likert scores of 4.75/5 and 4.0/5, respectively. Results revealed moderate mental demand among facilitators (M = .8/21; SD 6.4) and learners (M = 9.9/21; SD 4.5). Participants perceived the AI-generated script to help maintain focus (4.8/5), address learning objectives (4.2/5), and minimize distractions for both individuals (4.6/5) and teams (4.5/5). This study highlights the potential of LLM integration in aiding facilitators with organizing debriefing information. Though facilitators reported considerable cognitive load, the findings suggest that LLMs can enhance debrief quality, while there remains a continuous need for human oversight.
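The workload figures above (e.g., M = 9.9/21) are NASA-TLX subscale means on the instrument's 21-point scale. As a minimal sketch of how an overall unweighted "Raw TLX" score is computed from the six subscales, with made-up ratings rather than study data:

```python
# The six NASA-TLX subscales; each is rated on a 1-21 scale here.
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings: dict) -> float:
    """Unweighted Raw TLX: the mean of the six subscale ratings."""
    assert set(ratings) == set(SUBSCALES), "all six subscales required"
    return sum(ratings.values()) / len(SUBSCALES)

# Hypothetical single-respondent ratings (not values from the study):
demo = {"mental": 10, "physical": 4, "temporal": 8,
        "performance": 6, "effort": 9, "frustration": 5}
print(raw_tlx(demo))  # 7.0
```

The full NASA-TLX also supports pairwise-weighted scoring; the unweighted variant shown here is the common simplification.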

Language: English

Citations: 0

Artificial Intelligence Virtual Patient: A proof of concept study
Betty S. Chan, Tricia Dodds, Jinhui Xiang et al.

Research Square, Journal Year: 2025, Volume and Issue: unknown

Published: May 7, 2025

Abstract Background Artificial Intelligence (AI) is advancing, but its role in simulating detailed patient-doctor interactions in the style of Objective Structured Clinical Examinations (OSCEs) is still emerging. This study's goal was to create and validate an AI virtual patient (AIVP) that could interact with medical students, mimic a patient issue, and provide students with feedback on their performance. Methods Six AIVPs were developed to simulate OSCE scenarios for common emergency department presentations. The simulations were created using the Unity game engine, featuring a conversation loop that includes speech-to-text conversion (OpenAI Whisper), response generation (OpenAI ChatGPT 4o), and speech generation (OpenAI TTS). A tutor AI (ChatGPT 4o) then generates feedback after the interaction to help students improve their responses. Final-year medical students were given the opportunity to use the AIVPs and participated in pre- and post-AIVP assessments to evaluate the AIVP's effect on performance, with Wilcoxon paired tests used for analysis. Students also completed Likert-scale surveys on educational value and technical issues. Results Twenty-one students used the AIVPs over two weeks for a total of 21.7 hours, averaging 1.1 hours per user. Median scores improved from 63/100 (IQR: 53.5-70) to 70/100 (IQR: 63-73.5) (p = 0.29). On a scale of 0 (strongly disagree) to 5 (strongly agree), there was strong agreement that the AIVP was a valuable learning experience (mean 4.62, SD 0.65). Students also valued the feedback on their performance provided by the tutor AI at the end of each interaction (mean 4.38, SD 0.84). Technical issues like voice recognition problems, latency in the interaction, and occasional reversals were reported. Conclusion The AIVP is a novel tool for developing history-taking skills, and students found it to be a valuable learning experience. However, factors such as realism need further development.
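The conversation loop described above (speech-to-text, LLM response generation, speech synthesis) can be sketched as follows. The three stage functions are stand-in stubs for illustration only; the actual system calls OpenAI Whisper, ChatGPT 4o, and OpenAI TTS from inside a Unity client, and the stub behavior below is not the real API.

```python
def speech_to_text(audio: bytes) -> str:
    """Stub standing in for OpenAI Whisper transcription."""
    return audio.decode("utf-8")

def patient_reply(history: list, text: str) -> str:
    """Stub standing in for ChatGPT 4o acting as the virtual patient.

    Appends both turns to the shared conversation history.
    """
    history.append({"role": "student", "text": text})
    reply = f"(virtual patient answers: '{text}')"
    history.append({"role": "patient", "text": reply})
    return reply

def text_to_speech(text: str) -> bytes:
    """Stub standing in for OpenAI TTS synthesis."""
    return text.encode("utf-8")

def turn(history: list, audio_in: bytes) -> bytes:
    """One student turn: transcribe, generate a reply, synthesize audio."""
    return text_to_speech(patient_reply(history, speech_to_text(audio_in)))

history: list = []
audio_out = turn(history, b"Where does it hurt?")
print(audio_out.decode())
```

After all turns, the accumulated `history` would be handed to the tutor AI to generate feedback, mirroring the study's two-model design.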

Language: English

Citations: 0