ChatGPT's Performance Evaluation in Spreadsheets Modelling to Inform Assessments Redesign
Michelle L. F. Cheong

Journal of Computer Assisted Learning, Journal Year: 2025, Volume and Issue: 41(3)

Published: May 5, 2025

ABSTRACT Background: Increasingly, students are using ChatGPT to assist them in learning and even in completing their assessments, raising concerns of academic integrity loss and diminished critical thinking skills. Many articles have suggested that educators redesign assessments to be more ‘Generative‐AI‐resistant’ and to focus on assessing higher order thinking. However, there has been little attempt to quantify ChatGPT's performance at different cognitive levels and to provide empirical insights into that performance, which will affect how educators redesign their assessments. Objectives: Educators need new information on how well ChatGPT performs in order to assess their future assessments under this new paradigm. This paper attempts to fill this research gap by evaluating ChatGPT on spreadsheet modelling questions tested under four prompt engineering settings, providing the knowledge to support assessment redesign. Our proposed methodology can be applied to other course modules so that educators can derive their respective redesign actions. Methods: We evaluated ChatGPT 3.5 on solving spreadsheet modelling questions with multiple linked test items categorised according to the revised Bloom's taxonomy. We compared its accuracy under four prompt engineering settings, namely Zero‐Shot‐Baseline (ZSB), Zero‐Shot‐Chain‐of‐Thought (ZSCoT), One‐Shot (OS), and One‐Shot‐Chain‐of‐Thought (OSCoT), to establish how ChatGPT tackled technical questions under each setting and which setting is effective in enhancing response quality at each cognitive level. Results: We found that ChatGPT's performance was good up to Level 3 of the taxonomy under ZSB, and that its accuracy decreased as the cognitive level increased. From Level 4 onwards, it did not perform well, committing many mistakes. ZSCoT yielded modest improvements up to Level 5, making it a possible concern for instructors. OS yielded very significant improvements up to Level 4, while OSCoT provided the needed improvement at Level 5. None of the prompt settings was able to improve response quality at Level 6. Conclusions: We concluded that educators must be cognizant of how well ChatGPT answers questions at each cognitive level, and of how its responses can be enhanced with suitable prompts. To develop students' higher order thinking abilities, we provide recommendations that aim to mitigate the negative impact on student learning and to leverage ChatGPT to enhance learning, taking cognitive levels into consideration.
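The four prompt engineering settings compared in this abstract differ only in how the prompt is assembled around the question. The sketch below illustrates that assembly; the question text, worked example, and chain-of-thought trigger are hypothetical placeholders, not the paper's actual test items or wording.

```python
# Minimal sketch of the four prompt engineering settings compared in the
# study. QUESTION, WORKED_EXAMPLE, and COT_TRIGGER are hypothetical
# placeholders, not the paper's actual materials.

QUESTION = "Write a spreadsheet formula that computes the break-even quantity."

WORKED_EXAMPLE = (
    "Q: Compute total revenue given price in B1 and quantity in B2.\n"
    "A: =B1*B2"
)

COT_TRIGGER = "Let's think step by step."  # the standard zero-shot CoT cue

def build_prompt(setting: str, question: str = QUESTION) -> str:
    """Assemble the prompt text for one of the four settings."""
    if setting == "ZSB":    # Zero-Shot-Baseline: just the question
        return question
    if setting == "ZSCoT":  # Zero-Shot-CoT: question plus the CoT cue
        return f"{question}\n{COT_TRIGGER}"
    if setting == "OS":     # One-Shot: one worked example, then the question
        return f"{WORKED_EXAMPLE}\n\nQ: {question}\nA:"
    if setting == "OSCoT":  # One-Shot-CoT: worked example, CoT cue, question
        return f"{WORKED_EXAMPLE}\n\n{COT_TRIGGER}\nQ: {question}\nA:"
    raise ValueError(f"unknown setting: {setting}")

for s in ("ZSB", "ZSCoT", "OS", "OSCoT"):
    print(f"--- {s} ---\n{build_prompt(s)}\n")
```

Running the sketch prints the four prompt variants side by side, which makes the ZSB/ZSCoT/OS/OSCoT distinction concrete.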

Language: English

Integrating Generative Artificial Intelligence Tools and Competencies in Biomedical Engineering Education
Reem Khojah, Alexandra Werth, Kelly W. Broadhead et al.

Biomedical Engineering Education, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 25, 2025

Language: English

Citations: 0

Study on Factors Influencing Primary and Secondary School Teachers’ Acceptance of AI Tools Based on the UTAUT Model: A Case Study of Tianchang City, Anhui Province
Huixuan Xu, Datuk Dr Yasmin Binti Hussain, LI Xue-qin et al.

Journal of Education and Educational Research, Journal Year: 2025, Volume and Issue: 12(3), P. 106 - 110

Published: March 29, 2025

This study investigates the factors influencing primary and secondary school teachers’ acceptance of artificial intelligence (AI) tools in Tianchang City, Anhui Province, using the Unified Theory of Acceptance and Use of Technology (UTAUT) model. A quantitative approach was employed, with data collected via a structured questionnaire from 300 teachers in Tianchang. The survey measured the UTAUT constructs of performance expectancy, effort expectancy, social influence, and facilitating conditions, alongside self-reported AI tool acceptance. Structural equation modeling (SEM) revealed that performance expectancy (β = 0.45, p < .001) and facilitating conditions (β = 0.32, p < .01) were significant predictors of acceptance, whereas effort expectancy (β = 0.18, p = .06) and social influence (β = 0.14, p = .13) showed weaker effects. These findings validate UTAUT’s applicability in explaining AI adoption in educational settings and highlight the critical role of perceived utility and resource accessibility. Regionally, the study aligns with national AI-in-education policies but is shaped by local resource distribution. Practical implications include enhancing technical support, demonstrating AI’s tangible benefits, and tailoring training to reduce barriers. This research contributes to the understanding of technology integration in Chinese K-12 contexts and informs localized strategies for AI implementation.
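For readers unfamiliar with the structural side of such a model, the reported β weights correspond to regressing acceptance on the four UTAUT constructs. The sketch below illustrates this on synthetic data; it is a plain OLS simplification of the study's SEM (a full SEM would use a dedicated package), and every variable name and simulated effect size is illustrative, not the study's data.

```python
# Minimal sketch, on synthetic data, of the structural relationship the
# abstract reports: acceptance regressed on the four UTAUT constructs.
# The simulated effect sizes merely echo the reported pattern (PE and FC
# strong, EE and SI weak); none of this is the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300  # matches the reported sample size

# Standardised construct scores: performance expectancy, effort expectancy,
# social influence, facilitating conditions.
PE, EE, SI, FC = (rng.standard_normal(n) for _ in range(4))
acceptance = (0.45 * PE + 0.18 * EE + 0.14 * SI + 0.32 * FC
              + 0.6 * rng.standard_normal(n))

X = sm.add_constant(np.column_stack([PE, EE, SI, FC]))
result = sm.OLS(acceptance, X).fit()
print(result.summary(xname=["const", "PE", "EE", "SI", "FC"]))
```

The OLS coefficients recover the simulated weights; in the actual study, SEM additionally models each construct as a latent variable measured by its questionnaire items.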

Language: English

Citations: 0
