
Frontiers in Artificial Intelligence, Journal Year: 2025, Volume and Issue: 8
Published: May 21, 2025
Language: English
Medical Science Educator, Journal Year: 2025, Volume and Issue: unknown
Published: April 26, 2025
Language: English
Citations: 1
Forum for Education Studies, Journal Year: 2025, Volume and Issue: 3(2), P. 2730 - 2730
Published: April 1, 2025
This paper explores the transformative role of artificial intelligence (AI) in medical education, emphasizing its use as a pedagogical tool for technology-enhanced learning. It highlights AI's potential to enhance the learning process through various inquiry-based strategies and to support Competency-Based Medical Education (CBME) by generating high-quality assessment items with automated personalized feedback, analyzing data from both human supervisors and AI, and helping predict the future professional behavior of current trainees. It also addresses the inherent challenges and limitations of using AI in student assessment, calling for guidelines to ensure its valid and ethical use. Furthermore, AI integration into virtual patient (VP) technology to offer experiences of clinical encounters significantly enhances interactivity and realism, overcoming the limitations of conventional VPs. Although incorporating chatbots into VPs is promising, further research is warranted on their generalizability across clinical scenarios. The paper also discusses the learning preferences of Generation Z learners and suggests a conceptual framework for integrating AI into teaching and supporting learning, aligning with the needs of today's students and utilizing the adaptive capabilities of AI. Overall, this paper identifies areas of medical education where AI can play pivotal roles in overcoming educational challenges and offers perspectives on future developments in medical education. It calls for work to advance theory and practice, develop tools that innovate teaching practices tailored to learners, and understand the long-term impacts of AI-driven learning environments.
Language: English
Citations: 0
Frontiers in Education, Journal Year: 2025, Volume and Issue: 10
Published: May 1, 2025
Background: In the recent generative artificial intelligence (genAI) era, health sciences students (HSSs) are expected to face challenges regarding their future roles in healthcare. This multinational cross-sectional study aimed to confirm the validity of the novel FAME scale examining the themes of Fear, Anxiety, Mistrust, and Ethical issues about genAI. The study also explored the extent of apprehension among HSSs about genAI integration into their careers. Methods: The study was based on a self-administered online questionnaire distributed using convenience sampling. The survey instrument comprised the FAME scale, while apprehension toward genAI was assessed through a modified State-Trait Anxiety Inventory (STAI). Exploratory and confirmatory factor analyses were used to examine the construct validity of the scale. Results: The final sample comprised 587 students, mostly from Jordan (31.3%), Egypt (17.9%), Iraq (17.2%), Kuwait (14.7%), and Saudi Arabia (13.5%). Participants included students studying medicine (35.8%), pharmacy (34.2%), nursing (10.7%), dentistry (9.5%), medical laboratory sciences (6.3%), and rehabilitation (3.4%). Factor analysis confirmed the reliability of the scale. Of its constructs, Mistrust scored highest, followed by Ethics. Participants showed generally neutral apprehension toward genAI, with a mean score of 9.23 ± 3.60. In multivariate analysis, significant variations in apprehension were observed by previous ChatGPT use, faculty, and nationality, with Kuwaiti students expressing the lowest level of apprehension. Previous ChatGPT use correlated with lower apprehension levels. Higher agreement with the Ethics construct showed statistically significant associations with apprehension. Conclusion: The study revealed notable apprehension about genAI among Arab HSSs, which highlights the need for educational curricula that blend technological proficiency with ethical awareness. Educational strategies tailored to discipline and culture are needed to ensure job security and competitiveness in an AI-driven future.
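As a reader's aside, the scale-validation steps named in this abstract (reliability checking plus exploratory factor analysis) are easy to sketch in Python. The synthetic Likert data, item count, and four-factor choice below are assumptions for illustration only; this is not the study's analysis code.

```python
# Generic sketch: internal-consistency reliability and exploratory factor
# analysis on synthetic Likert data (not the study's actual dataset).
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# 587 respondents x 12 hypothetical FAME items on a 1-5 Likert scale.
responses = pd.DataFrame(rng.integers(1, 6, size=(587, 12)),
                         columns=[f"item_{i + 1}" for i in range(12)])

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

# Exploratory factor analysis with varimax rotation; in the real study the
# loadings would show which items cluster into Fear/Anxiety/Mistrust/Ethics.
fa = FactorAnalysis(n_components=4, rotation="varimax").fit(responses)
loadings = pd.DataFrame(fa.components_.T, index=responses.columns,
                        columns=["F1", "F2", "F3", "F4"])
print(loadings.round(2))
```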
Language: English
Citations: 0
Medical Teacher, Journal Year: 2025, Volume and Issue: unknown, P. 1 - 6
Published: May 4, 2025
The validation of multiple-choice question (MCQ)-based assessments typically requires administration to a test population, which is resource-intensive and practically demanding. Large language models (LLMs) are a promising tool to aid in many aspects of assessment development, including the challenge of determining the psychometric properties of items. This study investigated whether LLMs could predict the difficulty and point biserial indices of MCQs, potentially alleviating the need for preliminary analysis in a test population. Sixty MCQs developed by subject matter experts in anesthesiology were presented one hundred times each to five different LLMs (ChatGPT-4o, o1-preview, Claude 3.5 Sonnet, Grok-2, and Llama 3.2) and to clinical fellows. Response patterns were analyzed, and difficulty (proportion of correct responses) and point biserial (item-test score correlation) indices were calculated. Spearman correlation coefficients were used to compare indices between LLMs and fellows. Marked differences in response patterns were observed among the LLMs: ChatGPT-4o and Grok-2 showed variable responses across trials, while Claude 3.5 Sonnet and Llama 3.2 gave consistent responses. The LLMs outperformed the fellows, with mean scores of 58% to 85% compared with 57%. Three LLMs showed weak correlations with fellow difficulty indices (r = 0.28-0.29), while the two highest-scoring LLMs showed no correlation. No LLM predicted the point biserial indices. These findings suggest LLMs have limited utility in predicting MCQ performance metrics. Notably, the higher-scoring LLMs were less predictive of human performance, suggesting that as LLMs become more powerful, their ability to model examinee behavior may decrease. Understanding the consistency of an LLM's response pattern is critical for both research methodology and practical applications in assessment development. Future work should focus on leveraging LLMs' language-processing capabilities for overall test optimization (e.g., inter-item analysis) rather than predicting individual item characteristics.
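For readers unfamiliar with the two indices, the following Python sketch computes them on a synthetic 0/1 response matrix and compares two sets of difficulty indices with Spearman's rank correlation, as the study does. The data and the LLM proportions are stand-ins, not the study's results.

```python
# Generic sketch of the difficulty and point biserial indices on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(100, 60))  # examinees x items, 1 = correct

# Difficulty index: proportion of correct responses per item.
difficulty = responses.mean(axis=0)

# Point biserial index: correlation between each item and the rest-of-test
# score (the item itself is removed to avoid inflating the correlation).
total = responses.sum(axis=1)
point_biserial = np.array([
    stats.pointbiserialr(responses[:, j], total - responses[:, j])[0]
    for j in range(responses.shape[1])
])

# Comparing two sets of difficulty indices (e.g., fellow-derived vs.
# LLM-derived) with Spearman's rank correlation.
llm_difficulty = rng.random(60)  # placeholder for per-item LLM proportions
rho, p = stats.spearmanr(difficulty, llm_difficulty)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```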
Language: English
Citations: 0
Medical Teacher, Journal Year: 2025, Volume and Issue: unknown, P. 1 - 9
Published: May 17, 2025
Large language models (LLMs) show promise in medical education. This study examines LLMs' ability to score post-encounter notes (PNs) from Objective Structured Clinical Examinations (OSCEs) using an analytic rubric. The goal was to evaluate and refine methods for accurate, consistent scoring. Seven LLMs scored five PNs representing varying levels of performance, including an intentionally incorrect PN. An iterative experimental design tested different prompting strategies and temperature settings, a parameter controlling LLM response creativity. Scores were compared with the expected rubric-based results. Consistently accurate scoring required multiple rounds of prompt refinement. Simple prompts led to high variability, which improved with structured prompting approaches and low-temperature settings. LLMs occasionally made errors calculating total scores, necessitating external calculation. The final approach yielded consistently accurate scores across all models. LLMs can reliably apply analytic rubrics given a careful prompt engineering process, illustrating their potential as scalable, automated scoring tools in medical education, though further research is needed to explore the use of holistic rubrics. These findings demonstrate the utility of LLMs in assessment practices.
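A minimal sketch (not the authors' pipeline) of the pattern this abstract describes: a structured prompt, a low temperature setting, and an externally computed total. It uses the OpenAI Python SDK; the model name, rubric criteria, and JSON format are illustrative assumptions.

```python
# Structured, low-temperature rubric scoring with external total calculation.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC_PROMPT = (
    "Score the post-encounter note against each rubric criterion. "
    'Return only JSON, e.g. {"history": 0, "exam": 0, "differential": 0}, '
    "with each criterion scored 0-5."
)

def score_note(note_text: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",   # illustrative model choice
        temperature=0,    # low temperature for consistent, repeatable scoring
        messages=[
            {"role": "system", "content": RUBRIC_PROMPT},
            {"role": "user", "content": note_text},
        ],
    )
    scores = json.loads(resp.choices[0].message.content)
    # Compute the total externally: the study notes that LLMs occasionally
    # mis-add their own per-criterion scores.
    scores["total"] = sum(scores.values())
    return scores
```

Setting temperature to 0 trades response diversity for the repeatability that rubric scoring requires, which is why the study pairs it with structured prompts.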
Language: English
Citations: 0
Electronics, Journal Year: 2025, Volume and Issue: 14(11), P. 2098 - 2098
Published: May 22, 2025
To address the need for comprehensive terminology construction in rapidly evolving domains such as blockchain, this study examines how large language models (LLMs), particularly GPT, enhance automatic term extraction through human feedback. The experimental part involves 60 bachelor's students interacting with GPT in a three-step iterative prompting process: initial prompt formulation, intermediate refinement, and final adjustment. At each step, students' prompts are evaluated by a teacher using a structured rubric based on 6C criteria (clarity, complexity, coherence, creativity, consistency, contextuality), with their summed scores forming an overall grade. The analysis indicates that (1) grades correlate with GPT's performance across all steps, reaching the highest correlation (0.87) at Step 3; (2) the importance of criteria varies by step: e.g., clarity and creativity are most crucial initially, while coherence and consistency influence subsequent refinements, with contextuality having no effect at any step; (3) the linguistic accuracy of prompt formulations significantly outweighs domain-specific factual content in influencing performance. These findings suggest GPT has a robust foundational understanding of blockchain terminology, making clear, consistent, and linguistically accurate prompts more effective than contextual explanations for term extraction.
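The core analysis here is a correlation between summed rubric grades and GPT's extraction performance. The sketch below illustrates that computation on synthetic data; the score ranges, performance metric, and correlation method are assumptions, and the paper's 0.87 figure is not reproduced by this toy example.

```python
# Generic sketch: correlating summed 6C rubric grades with a per-student
# term-extraction quality score, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_students = 60

# Summed 6C rubric grades at one step (six criteria, assumed 1-5 each).
grades = rng.integers(1, 6, size=(n_students, 6)).sum(axis=1)

# Stand-in extraction quality score, loosely coupled to the grades.
extraction_score = np.clip(grades / 30 + rng.normal(0, 0.1, n_students), 0, 1)

rho, p = stats.spearmanr(grades, extraction_score)
print(f"grade vs. extraction quality: rho = {rho:.2f}, p = {p:.3g}")
```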
Language: English
Citations: 0