Analysis of Generative AI Policies in Computing Course Syllabi DOI Creative Commons

Areej Ali, Aayushi Hingle, Umama Dewan

et al.

Published: Feb. 12, 2025

Language: English

Computing Education in the Era of Generative AI DOI Open Access
Paul Denny, James Prather, Brett A. Becker

et al.

Communications of the ACM, Year: 2024, Volume: 67(2), pp. 56-67

Published: Jan. 18, 2024

Challenges and opportunities faced by computing educators and students adapting to LLMs capable of generating accurate source code from natural-language problem descriptions.

Language: English

Cited by

97

CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes DOI Creative Commons
Mark Liffiton, Brad E. Sheese, Jaromír Šavelka

et al.

Published: Nov. 13, 2023

Computing educators face significant challenges in providing timely support to students, especially in large class settings. Large language models (LLMs) have emerged recently and show great promise for providing on-demand help at a large scale, but there are concerns that students may over-rely on the outputs produced by these models. In this paper, we introduce CodeHelp, a novel LLM-powered tool designed with guardrails to provide assistance to programming students without directly revealing solutions. We detail the design of the tool, which incorporates a number of useful features for instructors, and we elaborate on the pipeline of prompting strategies we use to ensure that generated output is suitable for students. To evaluate CodeHelp, we deployed it in a first-year computer and data science course with 52 students and collected their interactions with the tool over a 12-week period. We examine students' usage patterns and perceptions, and we report reflections from the course instructor along with a series of recommendations for classroom use. Our findings suggest that CodeHelp is well-received by students, who value its availability and its help with resolving errors, and that for instructors it is easy to deploy and complements, rather than replaces, the support they already provide.
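The guardrail idea described in this abstract can be illustrated with a minimal sketch, not the authors' implementation: a system prompt instructs the model to hint and explain without emitting full solutions, and a lightweight post-check retries when a response contains too much code. The `call_llm` helper and the filtering heuristic below are assumptions made purely for illustration.

```python
# Minimal sketch of an LLM helper with "no direct solutions" guardrails.
# NOT the CodeHelp implementation; `call_llm` is a hypothetical stand-in
# for whatever chat-completion API an instructor has access to.

GUARDRAIL_SYSTEM_PROMPT = (
    "You are a teaching assistant for an introductory programming course. "
    "Explain errors and suggest next steps, but never provide complete, "
    "copy-pasteable solution code."
)

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical wrapper around an LLM chat API (assumption)."""
    raise NotImplementedError("Plug in your institution's LLM access here.")

def looks_like_full_solution(response: str, max_code_lines: int = 5) -> bool:
    """Crude heuristic: treat long fenced code blocks as 'too much code'."""
    in_code, code_lines = False, 0
    for line in response.splitlines():
        if line.strip().startswith("```"):
            in_code = not in_code
        elif in_code:
            code_lines += 1
    return code_lines > max_code_lines

def answer_student_query(question: str, error_output: str) -> str:
    prompt = f"Student question:\n{question}\n\nError output:\n{error_output}"
    response = call_llm(GUARDRAIL_SYSTEM_PROMPT, prompt)
    if looks_like_full_solution(response):
        # Retry once with a stronger reminder rather than showing the code.
        response = call_llm(
            GUARDRAIL_SYSTEM_PROMPT + " Respond with hints only, no code.",
            prompt,
        )
    return response
```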

Language: English

Cited by

70

Large Language Models: A Comprehensive Survey of its Applications, Challenges, Limitations, and Future Prospects DOI Creative Commons
Muhammad Usman Hadi, Qasem Al-Tashi, Rizwan Qureshi

et al.

Published: Nov. 16, 2023

Within the vast expanse of computerized language processing, a revolutionary entity known as Large Language Models (LLMs) has emerged, wielding immense power in its capacity to comprehend intricate linguistic patterns and conjure coherent and contextually fitting responses. These models are a type of artificial intelligence (AI) that have emerged as powerful tools for a wide range of tasks, including natural language processing (NLP), machine translation, and question-answering. This survey paper provides a comprehensive overview of LLMs, including their history, architecture, training methods, applications, and challenges. The paper begins by discussing the fundamental concepts of generative AI and the architecture of generative pre-trained transformers (GPT). It then provides an overview of the history of LLMs, their evolution over time, and the different methods that have been used to train them. It discusses the applications of LLMs in medicine, education, finance, and engineering, and also how LLMs are shaping the future of AI and how they can be used to solve real-world problems. The paper then turns to the challenges associated with deploying LLMs in real-world scenarios, including ethical considerations, model biases, interpretability, and computational resource requirements. It highlights techniques for enhancing the robustness and controllability of LLMs and for addressing bias, fairness, and generation quality issues. Finally, it concludes by highlighting directions for LLM research that need to be addressed in order to make these models more reliable and useful. The survey is intended to provide researchers, practitioners, and enthusiasts with an understanding of the evolution, applications, and challenges of LLMs. By consolidating state-of-the-art knowledge in the field, it serves as a valuable resource for further advancements in the development and utilization of LLMs for a wide range of applications. The GitHub repo for this project is available at https://github.com/anas-zafar/LLM-Survey

Language: English

Cited by

61

Prompt Problems: A New Programming Exercise for the Generative AI Era DOI Creative Commons
Paul Denny, Juho Leinonen, James Prather

et al.

Published: Mar. 7, 2024

Large language models (LLMs) are revolutionizing the field of computing education with their powerful code-generating capabilities. Traditional pedagogical practices have focused on code writing tasks, but there is now a shift in importance towards reading, comprehending and evaluating LLM-generated code. Alongside this shift, an important new skill is emerging -- the ability to solve programming tasks by constructing good prompts for code-generating models. In this work we introduce a new type of programming exercise to hone this nascent skill: 'Prompt Problems'. Prompt Problems are designed to help students learn how to write effective prompts for AI code generators. A student solves a Prompt Problem by crafting a natural language prompt which, when provided as input to an LLM, outputs code that successfully solves a specified task. We also present a web-based tool called Promptly which hosts a repository of Prompt Problems and supports the automated evaluation of prompt-generated code. We deploy Promptly in one CS1 and one CS2 course and describe our experiences, which include student perceptions of this new type of activity and their interactions with the tool. We find that students are enthusiastic about Prompt Problems, and appreciate how the problems engage their computational thinking skills and expose them to new programming constructs. We discuss ideas for future development and variations of Prompt Problems, and the need to carefully study their integration into classroom practice.
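The automated evaluation loop that a Promptly-style tool requires can be sketched as follows: send the student's prompt to a model, execute the returned code against instructor-written test cases, and report which cases pass. The `generate_code` helper and the test-case format are assumptions for illustration, not the published tool's design.

```python
# Sketch of grading prompt-generated code against instructor test cases.
# `generate_code` is a hypothetical LLM call; in practice the returned code
# would also need to be sandboxed before execution.

from typing import Callable, List, Tuple

def generate_code(student_prompt: str) -> str:
    """Hypothetical: send the student's prompt to an LLM, return Python source."""
    raise NotImplementedError

def evaluate_prompt(student_prompt: str,
                    function_name: str,
                    tests: List[Tuple[tuple, object]]) -> List[bool]:
    """Run each (args, expected) test case against the generated function."""
    source = generate_code(student_prompt)
    namespace: dict = {}
    exec(source, namespace)          # sandboxing omitted in this sketch
    func: Callable = namespace[function_name]
    results = []
    for args, expected in tests:
        try:
            results.append(func(*args) == expected)
        except Exception:
            results.append(False)
    return results

# Example test suite for a hypothetical Prompt Problem asking students to
# prompt for a function that multiplies two numbers:
TESTS = [((3, 4), 12), ((0, 5), 0)]
```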

Language: English

Cited by

54

Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective DOI Creative Commons
Mousa Al-kfairy, Dheya Mustafa, Nir Kshetri

et al.

Informatics, Year: 2024, Volume: 11(3), pp. 58-58

Published: Aug. 9, 2024

This paper conducts a systematic review and interdisciplinary analysis of the ethical challenges of generative AI technologies (N = 37), highlighting significant concerns such as privacy, data protection, copyright infringement, misinformation, biases, and societal inequalities. The ability of generative AI to produce convincing deepfakes and synthetic media, which threaten the foundations of truth, trust, and democratic values, exacerbates these problems. The analysis combines perspectives from various disciplines, including education and healthcare, underscoring the need for systems that promote equity and do not perpetuate social inequalities. It advocates a proactive approach to the development of generative AI, emphasizing the necessity of establishing policies, guidelines, and frameworks that prioritize human rights, fairness, and transparency. It calls for a multidisciplinary dialogue among policymakers, technologists, and researchers to ensure that responsible development conforms to societal values and standards. The paper stresses the urgency of addressing these challenges in a socially beneficial and ethically sound manner, contributing significantly to the discourse on managing AI's implications in the modern digital era. The study highlights theoretical and practical implications and suggests a number of future research directions.

Language: English

Cited by

30

The Widening Gap: The Benefits and Harms of Generative AI for Novice Programmers DOI
James Prather, Brent N. Reeves, Juho Leinonen

et al.

Published: Aug. 6, 2024

Language: English

Cited by

27

The promise and challenges of generative AI in education DOI Creative Commons
Michail N. Giannakos, Roger Azevedo, Peter Brusilovsky

et al.

Behaviour and Information Technology, Year: 2024, Volume: unknown, pp. 1-27

Published: Sep. 2, 2024

Generative artificial intelligence (GenAI) tools, such as large language models (LLMs), generate natural language and other types of content to perform a wide range of tasks. This represents a significant technological advancement that poses opportunities and challenges for educational research and practice. This commentary brings together contributions from nine experts working at the intersection of learning and technology and presents critical reflections on the opportunities, challenges, and implications related to GenAI technologies in the context of education. The commentary acknowledges that GenAI's capabilities can enhance some teaching and learning practices, such as learning design, regulation of learning, automated content generation, feedback, and assessment. Nevertheless, we also highlight its limitations, potential disruptions, ethical consequences, and misuses. The identified avenues for further research include the development of new insights into the roles humans play, strong and continuous evidence, human-centric design of technology, necessary policy, and support and competence mechanisms. Overall, we concur with a general skeptical optimism about the use of GenAI tools such as LLMs in education, and warn of the danger of hastily adopting them without deep consideration of their efficacy, ecosystem-level implications, ethics, and the pedagogical soundness of the resulting practices.

Language: English

Cited by

26

A Comparative Study of AI-Generated (GPT-4) and Human-crafted MCQs in Programming Education DOI Creative Commons
Jacob Arthur Doughty, Zipiao Wan, Anishka Bompelli

et al.

Published: Jan. 2, 2024

There is a constant need for educators to develop and maintain effective, up-to-date assessments. While there is a growing body of research in computing education on utilizing large language models (LLMs) for the generation of and engagement with coding exercises, the use of LLMs for generating programming MCQs has not been extensively explored. We analyzed the capability of GPT-4 to produce multiple-choice questions (MCQs) aligned with specific learning objectives (LOs) from Python programming classes in higher education. Specifically, we developed an LLM-powered (GPT-4) system that generates MCQs from high-level course context and module-level LOs. We evaluated 651 LLM-generated and 449 human-crafted MCQs aligned with 246 LOs from 6 Python courses. We found that GPT-4 was capable of producing MCQs with clear language, a single correct choice, and high-quality distractors. We also observed that the generated MCQs appeared to be well-aligned with the LOs. Our findings can be leveraged by educators wishing to take advantage of state-of-the-art generative models to support their MCQ authoring efforts.
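A minimal version of the generation pipeline described in this abstract can be sketched as one templated prompt per learning objective, with the model asked to return structured JSON. The prompt wording, the JSON schema, and the `call_gpt4` helper are illustrative assumptions rather than the authors' system.

```python
# Sketch: generate one MCQ per module-level learning objective (LO).
# `call_gpt4` is a hypothetical chat-completion wrapper; the JSON schema
# below is an assumption made for illustration.

import json

MCQ_PROMPT_TEMPLATE = """You are writing assessment items for a Python course.
Course context: {course_context}
Learning objective: {learning_objective}

Write one multiple-choice question aligned with this objective.
Return JSON with keys: "stem", "correct", "distractors" (list of 3).
"""

def call_gpt4(prompt: str) -> str:
    """Hypothetical LLM call returning the model's raw text response."""
    raise NotImplementedError

def generate_mcq(course_context: str, learning_objective: str) -> dict:
    prompt = MCQ_PROMPT_TEMPLATE.format(
        course_context=course_context,
        learning_objective=learning_objective,
    )
    raw = call_gpt4(prompt)
    mcq = json.loads(raw)
    # Basic sanity checks before an instructor reviews the item.
    assert mcq["correct"] not in mcq["distractors"]
    assert len(mcq["distractors"]) == 3
    return mcq
```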

Language: English

Cited by

25

Evaluating LLM-generated Worked Examples in an Introductory Programming Course DOI
Breanna Jury, Angela Lorusso, Juho Leinonen

et al.

Published: Jan. 2, 2024

Worked examples, which illustrate the process for solving a problem step-by-step, are a well-established pedagogical technique that has been widely studied in computing classrooms. However, creating high-quality worked examples is very time-intensive for educators, and thus learners tend not to have access to a broad range of such examples. The recent emergence of powerful large language models (LLMs), which appear capable of generating human-like content, may offer a solution. Separate strands of work have shown that LLMs can accurately generate code suitable for a novice audience and that they can generate explanations of code. Therefore, they may be well suited to overcoming the bottleneck of manual effort currently required. In this work, we present a novel tool, 'WorkedGen', which uses an LLM to generate interactive worked examples. We evaluate the tool with both an expert assessment and a user study involving students in a first-year Python programming course (n = ~400). We find that prompt chaining and one-shot learning are useful strategies for optimising the output when producing worked examples. Our analysis suggests the generated examples include clear explanations, and our classroom deployment revealed that students find LLM-generated worked examples useful for their learning. We propose several avenues for future work, including investigating WorkedGen's value for other languages and for more complex questions in advanced courses.
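Prompt chaining, as mentioned in this abstract, can be sketched as a sequence of dependent LLM calls in which each step's output feeds the next prompt. The three-step breakdown and the `call_llm` helper below are assumptions for illustration, not WorkedGen's actual pipeline.

```python
# Sketch of prompt chaining for a worked example: generate code first,
# then a step-by-step explanation of that code, then comprehension questions.
# `call_llm` is a hypothetical LLM wrapper; the step breakdown is an assumption.

def call_llm(prompt: str) -> str:
    """Hypothetical single-turn LLM call."""
    raise NotImplementedError

def generate_worked_example(topic: str) -> dict:
    # Step 1: generate novice-friendly code for the topic (a one-shot example
    # could be prepended here to steer style; omitted for brevity).
    code = call_llm(
        f"Write a short, novice-friendly Python program illustrating {topic}."
    )
    # Step 2: chain the generated code into the explanation prompt.
    explanation = call_llm(
        "Explain the following program step by step for a first-year "
        f"programming student:\n\n{code}"
    )
    # Step 3: chain both outputs into an interactive check for understanding.
    questions = call_llm(
        "Write two short comprehension questions about this program and "
        f"explanation:\n\nProgram:\n{code}\n\nExplanation:\n{explanation}"
    )
    return {"code": code, "explanation": explanation, "questions": questions}
```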

Language: English

Cited by

23

Developing evaluative judgement for a time of generative artificial intelligence DOI Creative Commons
Margaret Bearman, Joanna Tai, Phillip Dawson

et al.

Assessment & Evaluation in Higher Education, Year: 2024, Volume: 49(6), pp. 893-905

Published: Apr. 10, 2024

Generative artificial intelligence (AI) has rapidly increased the capacity for producing textual, visual and auditory outputs, yet there are ongoing concerns regarding the quality of those outputs. There is an urgent need to develop students' evaluative judgement – the capability to judge the quality of the work of self and others – in recognition of this new reality. In this conceptual paper, we describe the intersection between evaluative judgement and generative AI with a view to articulating how assessment practices can help students learn to work productively with AI. We propose three foci: (1) developing evaluative judgement of AI outputs; (2) developing evaluative judgement of AI processes; and (3) developing students' own judgements. We argue that the capabilities to identify and calibrate quality are uniquely human at a time of technological acceleration and can be developed through existing formative assessment strategies. These approaches circumvent and interrupt uncritical usage of generative AI. The relationship between evaluative judgement and AI is more than just the application of human judgement to machine outputs: we have a collective responsibility, as educators and learners, to ensure that humans do not relinquish their roles as arbiters of quality.

Language: English

Cited by

20