Reacting to Generative AI: Insights from Student and Faculty Discussions on Reddit
Chuhao Wu, Xinyu Wang, John M. Carroll

et al.

Published: Jan. 18, 2024

Generative Artificial Intelligence (GenAI) such as ChatGPT has elicited strong reactions from almost all stakeholders across the education system. Education-oriented and academic social media communities provide an important venue for these stakeholders to share experiences and exchange ideas about GenAI, which is constructive for developing human-centered policies. This study examines early user discussions, consisting of 725 Reddit threads posted between 06/2022 and 05/2023. Through natural language processing (NLP) and content analysis, we observe increasingly negative sentiment in the discussions and identify six main categories of student and faculty reactions to GenAI in education. These reflect concerns about academic integrity and AI's impact on the value of traditional education. Our analysis also highlights the additional workload imposed by new technologies. The findings suggest that dialogue within the community is critical and can mitigate sources of tension between students and faculty.

Language: English
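The sentiment trend reported in the abstract above can be illustrated with a minimal lexicon-based scorer. This is a sketch only: the toy lexicons, the `monthly_trend` helper, and the demo threads are invented for illustration, and the paper's actual NLP pipeline is not specified in the abstract.

```python
from statistics import mean

# Toy lexicons for illustration only; a real pipeline (e.g., VADER) is far richer.
POSITIVE = {"helpful", "great", "useful", "love", "efficient"}
NEGATIVE = {"cheating", "worried", "unfair", "ban", "plagiarism"}

def sentiment(text: str) -> float:
    """Score in [-1, 1]: (pos - neg) / matched words; 0.0 if nothing matches."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def monthly_trend(threads):
    """threads: iterable of (month, text) pairs -> {month: mean sentiment}."""
    by_month = {}
    for month, text in threads:
        by_month.setdefault(month, []).append(sentiment(text))
    return {m: mean(scores) for m, scores in by_month.items()}

demo = [
    ("2022-12", "ChatGPT is a helpful and efficient study tool"),
    ("2023-04", "Worried about cheating and plagiarism, just ban it"),
]
trend = monthly_trend(demo)  # sentiment drops from positive to negative
```

Aggregating per-thread scores by month, as above, is one simple way the "increasingly negative sentiment" finding could be operationalized.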

AI for chemistry teaching: responsible AI and ethical considerations
Ron Blonder, Yael Feldman-Maggor

Chemistry Teacher International, Journal Year: 2024, Volume and Issue: unknown

Published: Oct. 15, 2024

Abstract This paper discusses the ethical considerations surrounding generative artificial intelligence (GenAI) in chemistry education, aiming to guide teachers toward responsible AI integration. GenAI, driven by advanced models like Large Language Models, has shown substantial potential for generating educational content. However, this technology's rapid rise has brought forth concerns regarding its general and educational use that require careful attention from educators. The UNESCO framework on GenAI in education provides a comprehensive overview of the controversies around these considerations, emphasizing human agency, inclusion, equity, and cultural diversity. Ethical issues include digital poverty, lack of national regulatory adaptation, use of content without consent, unexplainable models used to generate outputs, AI-generated content polluting the internet, a lack of understanding of the real world, reducing the diversity of opinions, further marginalizing already marginalized voices, and deep fakes. The paper delves into these eight controversies, presenting relevant examples to stress the need to evaluate AI-generated content critically. It emphasizes the importance of relating these issues to teachers' pedagogical knowledge and argues that AI usage must integrate such insights to prevent the propagation of biases and inaccuracies. The conclusion stresses the necessity of teacher training to effectively and ethically employ AI practices.

Language: English

Citations: 5

ChatGPT as a tool for self-learning English among EFL learners: A multi-methods study

Nguyen Hoang Mai Tram, Tin Trung Nguyen, Cong Duc Tran

et al.

System, Journal Year: 2024, Volume and Issue: 127, P. 103528 - 103528

Published: Oct. 28, 2024

Language: English

Citations: 5

Differences in User Perception of Artificial Intelligence-Driven Chatbots and Traditional Tools in Qualitative Data Analysis
Boštjan Šumak, Maja Pušnik, Ines Kožuh

et al.

Applied Sciences, Journal Year: 2025, Volume and Issue: 15(2), P. 631 - 631

Published: Jan. 10, 2025

Qualitative data analysis (QDA) tools are essential for extracting insights from complex datasets. This study investigates researchers' perceptions of the usability, user experience (UX), mental workload, trust, task complexity, and emotional impact of three tools: Taguette 1.4.1 (a traditional QDA tool), ChatGPT (GPT-4, December 2023 version), and Gemini (formerly Google Bard, December 2023 version). Participants (N = 85), Master's students from the Faculty of Electrical Engineering and Computer Science with prior experience in UX evaluations and familiarity with AI-based chatbots, performed sentiment annotation tasks using these tools, enabling a comparative evaluation. The results show that the AI tools were associated with lower cognitive effort and more positive emotional responses compared to Taguette, which caused higher frustration, especially during cognitively demanding tasks. Among the tools, ChatGPT achieved the highest usability score (SUS = 79.03) and was rated positively for engagement. Trust levels varied, with participants preferring ChatGPT for accuracy and confidence. Despite these differences, all tools performed consistently in identifying qualitative patterns. These findings suggest that AI-driven chatbots can enhance QDA experiences while emphasizing the need to align tool selection with specific research needs and user preferences.

Language: English

Citations: 0
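The SUS value reported in the abstract above (79.03) comes from the standard System Usability Scale scoring formula, which is worth spelling out. The `sus_score` function below is a generic sketch of that formula, not code from the study, and the sample responses are hypothetical.

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert responses.
    Odd-numbered items are positively worded and contribute (score - 1);
    even-numbered items are negatively worded and contribute (5 - score).
    The summed contributions are scaled by 2.5 onto a 0-100 range."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # index 0 = item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5

# One hypothetical respondent leaning positive:
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```

Averaging such per-respondent scores across the 85 participants is how a study-level figure like 79.03 would typically be obtained.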

Can ChatGPT Solve Undergraduate Exams from Warehousing Studies? An Investigation
Sven Franke, Christoph Pott, Jérôme Rutinowski

et al.

Computers, Journal Year: 2025, Volume and Issue: 14(2), P. 52 - 52

Published: Feb. 5, 2025

The performance of Large Language Models, such as ChatGPT, generally increases with every new model release. In this study, we investigated to what degree different GPT models were able to solve the exams of three undergraduate courses on warehousing. We contribute to the discussion of ChatGPT's existing logistics knowledge, particularly in the field of warehousing. Both the free version (GPT-4o mini) and the premium version (GPT-4o) completed the warehousing exams using different prompting techniques (with and without role assignments as experts or students). o1-preview was also used (without a role assignment) for six runs. The tests were repeated multiple times. A total of 60 tests were conducted and compared with the in-class results of students. The results show that ChatGPT passed 46 of the 60 tests. The best run solved 93% of one exam correctly. Compared with students from the respective semester, ChatGPT outperformed the students in one exam. In the other two exams, the students performed better on average than ChatGPT.

Language: English

Citations: 0

Utilizing Large Language Models for Educating Patients About Polycystic Ovary Syndrome in China: A Two-Phase Study (Preprint)

X. Chen

Published: Feb. 17, 2025

BACKGROUND Polycystic ovary syndrome (PCOS) is a prevalent condition requiring effective patient education, particularly in China. Large language models (LLMs) present a promising avenue for this. This two-phase study evaluates six LLMs for educating Chinese patients about PCOS. It assesses their capabilities in answering questions, interpreting ultrasound images, and providing instructions within a real-world clinical setting. OBJECTIVE This study systematically evaluated six large models—Gemini 2.0 Pro, OpenAI o1, ChatGPT-4o, ChatGPT-4, ERINE 4.0, and GLM-4—for use in gynecological medicine. It assessed their performance in several areas: answering questions from the Gynecology Qualification Examination, understanding and coping with polycystic ovary cases, writing patient instructions, and helping patients to solve problems. METHODS A two-step evaluation method was used. First, the models were tested on 136 exam questions and 36 ultrasound images, and their results were compared with those of medical students and residents. Six gynecologists rated the models' responses to 23 PCOS-related questions using a Likert scale, and a readability tool was used to review the content objectively. In the following phase, 40 PCOS patients evaluated the two central systems, Gemini 2.0 Pro and OpenAI o1, in terms of satisfaction, text readability, and professional evaluation. RESULTS During the initial phase of testing, o1 and Gemini 2.0 Pro demonstrated impressive accuracy on the specialist examination questions, achieving rates of 93.63% and 92.40%, respectively. Their performance on image diagnostic tasks was also noteworthy, with o1 reaching an accuracy of 69.44% and Gemini reaching 53.70%. Regarding response quality, o1 significantly outperformed the other models in accuracy, completeness, practicality, and safety. However, its responses were notably more complex (average readability score 13.98, p = 0.003). The second-phase evaluation revealed that Gemini 2.0 Pro excelled in readability (patient rating 3.45, p < 0.01; physician rating 3.35, p = 0.03), surpassing o1 (2.65 and 2.90), but it slightly lagged behind o1 in completeness (3.05 vs. 3.50, p = 0.04). CONCLUSIONS This study reveals that large language models have considerable potential to address the educational issues faced by patients with PCOS, as they are capable of accurate and comprehensive responses. Nevertheless, they still need to be strengthened so that they can balance clarity and comprehensiveness. In addition, their broader capabilities, especially the ability to handle image-related and regulation categories, must be improved to meet the needs of clinical practice. CLINICALTRIAL None

Language: English

Citations: 0
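The readability scores reported in the abstract above resemble grade-level indices. The abstract does not name the tool used, so purely as an illustration, here is a minimal sketch of the common Flesch-Kincaid grade-level formula with a crude vowel-group syllable heuristic; real readability tools use pronunciation dictionaries and better tokenization.

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels, minimum 1."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)
```

On this scale, a value near 13.98 corresponds roughly to early-college reading difficulty, which matches the study's concern that o1's responses may be too complex for patient education.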

Applying IRT to Distinguish Between Human and Generative AI Responses to Multiple-Choice Assessments

Alona Strugatski, Giora Alexandron

Published: Feb. 21, 2025

Language: English

Citations: 0

GRADERS OF THE FUTURE: COMPARING THE CONSISTENCY AND ACCURACY OF GPT4 AND PRE-SERVICE TEACHERS IN PHYSICS ESSAY QUESTION ASSESSMENTS

XU Yu-bin, Lin Liu, Jianwen Xiong

et al.

Journal of Baltic Science Education, Journal Year: 2025, Volume and Issue: 24(1), P. 187 - 207

Published: Feb. 25, 2025

As the development and application of large language models (LLMs) in physics education progress, the well-known AI-based chatbot ChatGPT4 has presented numerous opportunities for educational assessment. Investigating the potential of AI tools for practical assessment carries profound significance. This study explored the comparative performance of human graders and ChatGPT4 in scoring upper-secondary physics essay questions. Eighty students' responses to two essay questions were evaluated by 30 pre-service teachers and by ChatGPT4. The analysis highlighted their consistency and accuracy, including intra-human comparisons, GPT grading at different times, and human-GPT variations across cognitive categories. The intraclass correlation coefficient (ICC) was used to assess consistency, while accuracy was illustrated through Pearson correlations with expert scores. The findings reveal that ChatGPT4 demonstrated higher consistency in scoring, while human scorers showed superior accuracy in most instances. These results underscore the strengths and limitations of using LLMs in educational assessments. Their high consistency can be valuable for standardizing assessments across diverse contexts, while the nuanced understanding and flexibility of human graders remain irreplaceable for handling complex subjective evaluations. Keywords: Physics essay question assessment, GPT grader, Human graders.

Language: English

Citations: 0
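The two statistics named in the abstract above, the intraclass correlation coefficient for grading consistency and the Pearson correlation for accuracy against expert scores, can be sketched in a few lines. The one-way ICC(1) variant below is an assumption made for illustration; the paper does not state which ICC form was used.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def icc1(ratings):
    """One-way random ICC(1). `ratings` has one row per student and one
    column per grader. ICC(1) = (MSB - MSW) / (MSB + (k-1) * MSW),
    where MSB/MSW are the between/within-student mean squares."""
    n, k = len(ratings), len(ratings[0])
    grand = mean(v for row in ratings for v in row)
    row_means = [mean(row) for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((v - m) ** 2
              for row, m in zip(ratings, row_means) for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfectly agreeing graders yield an ICC of 1, and systematic disagreement drives it toward or below 0, which is how "higher consistency" for GPT grading would show up numerically.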

AI-enabled networked learning: A posthuman connectivist approach in an English for specific purposes classroom
Humaira Mariyam B., V. K. Karthika

Education and Information Technologies, Journal Year: 2025, Volume and Issue: unknown

Published: March 31, 2025

Language: English

Citations: 0

Examining E-tutors Experiences of Facilitating Modules through a Learning Management System: A Case Study of an Open Distance E-Learning Institution
Mpipo Zipporah Sedio

E-Journal of Humanities Arts and Social Sciences, Journal Year: 2025, Volume and Issue: unknown, P. 362 - 376

Published: March 28, 2025

The purpose of this paper was to investigate the facilitation experiences of e-tutors who were assigned to teach modules through a Learning Management System (LMS). The article employed an interpretivist survey method for e-tutors to articulate their impressions about how the LMS leverages them to become experts in their modules. Constructivism learning theory served as the lens for the paper. Quantitative analysis was used to collect accounts from five e-tutors, which were arranged and presented in tables. The e-tutor samples were based on criteria set during their appointment by the case institution. It was found that the e-tutors could not effectively facilitate modules with the LMS. It is recommended that e-tutors should be trained to be able to promote teaching using the LMS across different module courses. The study contributes to the growing literature on the ODeL e-tutoring model and student support. Keywords: Open Distance e-Learning, Learning Management System

Language: English

Citations: 0

Pan-indexicality and prompt: developing a teaching model for AI-mediated academic writing
Jing Zhu, Chunyun Duan

Language and Semiotic Studies, Journal Year: 2025, Volume and Issue: unknown

Published: April 4, 2025

Abstract AI-mediated academic writing calls for new pedagogical approaches to the application of prompt engineering in writing courses. Whereas previous studies mainly inform students of prompting techniques, little is known about how the prompt functions from the perspective of meaning negotiation between humans and generative AI. This paper explores the integration of the Pan-indexical process of linguistic signs into a prompt-based teaching model (PBTM), emphasizing its potential to facilitate meaning negotiation during the early stage of academic writing. The PBTM consists of four key components: encyclopedic knowledge, contextual information, evaluative and critical thinking, and iterative prompt design. At its core lies the idea of development organized around four major steps: crafting an initial prompt; refining it with contextual information; engaging in evaluative and critical thinking; and iterative progression toward the desired response. The study suggests that linguistics can be employed to enhance students' ability to optimize prompts through a deeper understanding of how AI supports their writing process.

Language: English

Citations: 0
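The four PBTM steps described in the abstract above map naturally onto an iterative prompting loop. The sketch below is purely illustrative: `generate` is a hypothetical stand-in for a real GenAI API call, and `pbtm_prompt` is an invented helper, not part of the paper's model.

```python
# Hypothetical stand-in for a GenAI call; swap in a real API client.
def generate(prompt: str) -> str:
    return f"[model response to: {prompt}]"

def pbtm_prompt(task, context=None, critique=None):
    """Assemble a prompt following the PBTM steps: start from the initial
    task, optionally refine it with contextual information, then fold in
    evaluative/critical feedback to drive the next iteration."""
    parts = [task]
    if context:
        parts.append(f"Context: {context}")
    if critique:
        parts.append(f"Revise considering: {critique}")
    return " | ".join(parts)

# Iterative progression toward the desired response:
draft = generate(pbtm_prompt("Outline an argument on AI in education"))
refined = generate(pbtm_prompt("Outline an argument on AI in education",
                               context="for a graduate writing course",
                               critique="make the counterarguments explicit"))
```

Each pass through the loop corresponds to one cycle of meaning negotiation: the student's evaluation of the previous response becomes the critique that shapes the next prompt.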