Generative AI in Higher Art Education DOI
Xi Chen,

Yuebin Liao,

Wei Yu

et al.

Published: April 19, 2024

Language: English

AI for chemistry teaching: responsible AI and ethical considerations DOI Creative Commons
Ron Blonder, Yael Feldman-Maggor

Chemistry Teacher International, Journal year: 2024, Issue: unknown

Published: Oct. 15, 2024

Abstract This paper discusses the ethical considerations surrounding generative artificial intelligence (GenAI) in chemistry education, aiming to guide teachers toward responsible AI integration. GenAI, driven by advanced models like Large Language Models, has shown substantial potential for generating educational content. However, this technology’s rapid rise has brought forth concerns regarding its general and educational use that require careful attention from educators. The UNESCO framework on GenAI in education provides a comprehensive overview of the controversies around these considerations, emphasizing human agency, inclusion, equity, and cultural diversity. The ethical issues include digital poverty, a lack of national regulatory adaptation, the use of content without consent, unexplainable models used to generate outputs, AI-generated content polluting the internet, a lack of understanding of the real world, reducing the diversity of opinions and further marginalizing already marginalized voices, and deep fakes. The paper delves into these eight controversies, presenting relevant examples to stress the need to evaluate GenAI outputs critically. It emphasizes the importance of relating these issues to teachers’ pedagogical content knowledge and argues that responsible usage must integrate such insights to prevent the propagation of biases and inaccuracies. The conclusion stresses the necessity of teacher training to employ GenAI effectively and ethically in teaching practices.

Language: English

Cited

5

ChatGPT as a tool for self-learning English among EFL learners: A multi-methods study DOI

Nguyen Hoang Mai Tram,

Tin Trung Nguyen, Cong Duc Tran

et al.

System, Journal year: 2024, Issue: 127, pp. 103528–103528

Published: Oct. 28, 2024

Language: English

Cited

5

From GPT-3.5 to GPT-4.o: A Leap in AI’s Medical Exam Performance DOI Creative Commons
Markus Kipp

Information, Journal year: 2024, Issue: 15(9), pp. 543–543

Published: Sep. 5, 2024

ChatGPT is a large language model trained on increasingly large datasets to perform diverse language-based tasks. It is capable of answering multiple-choice questions, such as those posed by medical licensing examinations. ChatGPT has been generating considerable attention in both academic and non-academic domains in recent months. In this study, we aimed to assess GPT’s performance on anatomical multiple-choice questions retrieved from medical licensing examinations in Germany. Two different versions, GPT-3.5 and GPT-4.o, were compared. GPT-3.5 demonstrated moderate accuracy, correctly answering 60–64% of the questions from the autumn 2022 and spring 2021 exams. In contrast, GPT-4.o showed significant improvement, achieving 93% accuracy on the autumn 2022 exam and 100% on the spring 2021 exam. When tested on 30 unique questions not available online, GPT-4.o maintained a 96% accuracy rate. Furthermore, GPT-4.o consistently outperformed medical students across six state exams, with a statistically significant mean score of 95.54% compared with the students’ 72.15%. The study demonstrates that GPT-4.o outperforms its predecessor, GPT-3.5, as well as a cohort of medical students, indicating its potential as a powerful tool in medical education and assessment. This improvement highlights the rapid evolution of LLMs and suggests that AI could play an important role in supporting and enhancing medical training, potentially offering supplementary resources for professionals. However, further research is needed to assess the limitations and practical applications of AI systems in real-world clinical practice.

Language: English

Cited

4

Differences in User Perception of Artificial Intelligence-Driven Chatbots and Traditional Tools in Qualitative Data Analysis DOI Creative Commons
Boštjan Šumak, Maja Pušnik, Ines Kožuh

et al.

Applied Sciences, Journal year: 2025, Issue: 15(2), pp. 631–631

Published: Jan. 10, 2025

Qualitative data analysis (QDA) tools are essential for extracting insights from complex datasets. This study investigates researchers’ perceptions of the usability, user experience (UX), mental workload, trust, task complexity, and emotional impact of three tools: Taguette 1.4.1 (a traditional QDA tool), ChatGPT (GPT-4, December 2023 version), and Gemini (formerly Google Bard). Participants (N = 85), Master’s students from the Faculty of Electrical Engineering and Computer Science with prior experience in UX evaluations and familiarity with AI-based chatbots, performed sentiment annotation tasks using these tools, enabling a comparative evaluation. The results show that the AI tools were associated with lower cognitive effort and more positive emotional responses compared to Taguette, which caused higher frustration, especially during cognitively demanding tasks. Among the tools, ChatGPT achieved the highest usability score (SUS 79.03) and was rated most positively for engagement. Trust levels varied and were shaped by the tools’ perceived accuracy and the confidence they inspired. Despite these differences, all tools performed consistently in identifying qualitative patterns. These findings suggest that AI-driven chatbots can enhance user experiences in qualitative data analysis while emphasizing the need to align tool selection with specific tasks and user preferences.
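The SUS value reported above (79.03 for ChatGPT) follows the standard System Usability Scale scoring rule: ten Likert items (1–5), odd items contribute (response − 1), even items contribute (5 − response), and the sum is scaled by 2.5 onto a 0–100 range. A minimal sketch (the function name is illustrative, not from the paper):

```python
def sus_score(responses):
    """Compute the System Usability Scale (SUS) score from ten
    Likert responses (1 = strongly disagree, 5 = strongly agree)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    # Odd-numbered items (index 0, 2, ...) are positively worded,
    # even-numbered items are negatively worded and thus reversed.
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5  # scale the 0-40 sum to 0-100

# A neutral answer to every item yields the midpoint score of 50.
print(sus_score([3] * 10))  # -> 50.0
```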

Language: English

Cited

0

Students’ perceptions about the opportunities and challenges of ChatGPT in higher education: a cross-sectional survey based in China DOI
Xi Cao, Yu‐Jia Lin, Jiahui Zhang

et al.

Education and Information Technologies, Journal year: 2025, Issue: unknown

Published: Jan. 10, 2025

Language: English

Cited

0

Exploring ChatGPT’s role in English grammar learning: A Kolb model perspective DOI
Nagaletchimee Annamalai, Brandford Bervell

Innovations in Education and Teaching International, Journal year: 2025, Issue: unknown, pp. 1–17

Published: Jan. 10, 2025

This study investigated the application of the Kolb model to assess the efficacy of ChatGPT in enhancing English grammar learning. Data were gathered through interviews and observations. By analysing the data across the model's stages - concrete experience, reflective observation, abstract conceptualisation, and active experimentation - both strengths and weaknesses became apparent. The results indicated that while ChatGPT encourages interactive learning and enthusiasm among students, there is a prevailing doubt regarding its accuracy, underscoring the necessity of maintaining a critical mindset towards AI-generated content. Participants emphasised ChatGPT's supportive role in grammar education, aiding in understanding and applying grammar concepts. However, concerns about occasional inaccuracies and its struggles to understand contextual nuances were observed. The study highlights the importance of human involvement alongside AI tools and the need for students to develop technological literacy. Furthermore, it proposes pedagogical implications for effectively utilising ChatGPT in education.

Language: English

Cited

0

Can ChatGPT Solve Undergraduate Exams from Warehousing Studies? An Investigation DOI Creative Commons
Sven Franke, Christoph Pott, Jérôme Rutinowski

et al.

Computers, Journal year: 2025, Issue: 14(2), pp. 52–52

Published: Feb. 5, 2025

The performance of Large Language Models, such as ChatGPT, generally increases with every new model release. In this study, we investigated to what degree different GPT models were able to solve the exams of three undergraduate courses on warehousing. We contribute to the discussion of ChatGPT’s existing logistics knowledge, particularly in the field of warehousing. Both the free version (GPT-4o mini) and the premium version (GPT-4o) completed the warehousing exams using different prompting techniques (with and without role assignments as experts or students). The o1-preview model was also used (without a role assignment) for six runs. The tests were repeated multiple times, for a total of 60 tests, and the results were compared with the in-class results of the students. They show that ChatGPT passed 46 of the 60 tests. The best run solved 93% of one exam correctly. Compared to the students from the respective semester, ChatGPT outperformed the students in one exam. In the other two exams, the students performed better on average than ChatGPT.

Language: English

Cited

0

Utilizing Large Language Models for Educating Patients About Polycystic Ovary Syndrome in China: A Two-Phase Study (Preprint) DOI Creative Commons

X. Chen

Published: Feb. 17, 2025

BACKGROUND Polycystic ovary syndrome (PCOS) is a prevalent condition requiring effective patient education, particularly in China. Large language models (LLMs) present a promising avenue for this. This two-phase study evaluates six LLMs for educating Chinese patients about PCOS. It assesses their capabilities in answering questions, interpreting ultrasound images, and providing instructions within a real-world clinical setting. OBJECTIVE We systematically evaluated six large language models (Gemini 2.0 Pro, OpenAI o1, ChatGPT-4o, ChatGPT-4, ERNIE 4.0, and GLM-4) for use in gynecological medicine. We assessed their performance in several areas: answering questions from the Gynecology Qualification Examination, understanding and coping with polycystic ovary cases, writing patient instructions, and helping patients solve problems. METHODS A two-step evaluation method was used. First, the models were tested on 136 exam questions and 36 ultrasound images, and their results were compared with those of medical students and residents. Six gynecologists rated the models' responses to 23 PCOS-related questions using a Likert scale, and a readability tool was used to review the content objectively. In the second phase, 40 patients with PCOS used the two top-performing systems, Gemini 2.0 Pro and OpenAI o1, and compared them in terms of satisfaction, text readability, and professional evaluation. RESULTS In the initial phase of testing, OpenAI o1 and Gemini 2.0 Pro demonstrated impressive accuracy on the specialist exam questions, achieving rates of 93.63% and 92.40%, respectively. Their performance on the image diagnostic tasks was also noteworthy, with accuracies of 69.44% and 53.70%. Regarding response quality, o1 significantly outperformed the other models in accuracy, completeness, practicality, and safety; however, its responses were notably more complex (average readability score 13.98, p = 0.003). The second-phase evaluation revealed that Gemini 2.0 Pro excelled in readability (patient rating 3.45, p < 0.01; physician rating 3.35, p = 0.03), surpassing o1 (2.65 and 2.90), but slightly lagged behind o1 in completeness (3.05 vs. 3.50, p = 0.04). CONCLUSIONS The study reveals that large language models have considerable potential to address the educational issues faced by patients with PCOS and are capable of accurate and comprehensive responses. Nevertheless, they still need to be strengthened to balance clarity and comprehensiveness. In addition, their multimodal abilities, especially the ability to handle ultrasound images, must be improved to meet the needs of clinical practice. CLINICALTRIAL None

Language: English

Cited

0

On Continually Tracing Origins of LLM-Generated Text and Its Application in Detecting Cheating in Student Coursework DOI Creative Commons
Quan Wang, Haoran Li

Big Data and Cognitive Computing, Journal year: 2025, Issue: 9(3), pp. 50–50

Published: Feb. 20, 2025

Large language models (LLMs) have demonstrated remarkable capabilities in text generation, which also raises numerous concerns about their potential misuse, especially in educational exercises and academic writing. Accurately identifying and tracing the origins of LLM-generated content is crucial for accountability and transparency, ensuring the responsible use of LLMs in educational environments. Previous methods utilize binary classifiers to discriminate whether a piece of text was written by a human or generated by a specific LLM, or employ multi-class classifiers to trace the source LLM from a fixed set. These methods, however, are restricted to one or several pre-specified LLMs and cannot generalize to new LLMs, which are continually emerging. This study formulates the origin-tracing task in a class-incremental learning (CIL) fashion, where new LLMs continually emerge and the model incrementally learns to identify them without forgetting the old ones. A training-free continual learning method is further devised for the task, the idea of which is to extract prototypes for emerging LLMs using a frozen encoder and then perform origin tracing via prototype matching after a delicate decorrelation process. For evaluation, two datasets are constructed, one in English and one in Chinese. They simulate a scenario in which six LLMs emerge over time and are used to generate student essays, and an origin detector has to expand its recognition scope as new LLMs appear. The experimental results show that the proposed method achieves an average accuracy of 97.04% on the English dataset and 91.23% on the Chinese dataset. The results validate the feasibility of continual origin tracing and verify the effectiveness of the proposed method in detecting cheating in student coursework.
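The training-free idea described in this abstract can be sketched as nearest-prototype classification over frozen-encoder features. This is an illustrative sketch only: it omits the paper's decorrelation step, and the function names are invented for the example.

```python
import numpy as np

def class_prototypes(features, labels):
    """Average the frozen-encoder features of each known LLM's texts
    into one prototype vector per class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def trace_origin(protos, query):
    """Assign a query feature vector to the nearest prototype by cosine
    similarity. A newly emerging LLM is handled by adding one more
    prototype -- no retraining of the encoder is needed."""
    best, best_sim = None, -np.inf
    for c, p in protos.items():
        sim = query @ p / (np.linalg.norm(query) * np.linalg.norm(p))
        if sim > best_sim:
            best, best_sim = c, sim
    return best
```

The appeal of this design in a CIL setting is that each class is summarized independently, so learning a new origin cannot overwrite what was learned about the old ones.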

Language: English

Cited

0

Applying IRT to Distinguish Between Human and Generative AI Responses to Multiple-Choice Assessments DOI

Alona Strugatski,

Giora Alexandron

Published: Feb. 21, 2025

Language: English

Cited

0