ChatGPT giving advice on how to cheat in university assignments—how workable are its suggestions?
Dirk Spennemann, Jessica Biles, Lachlan Brown, et al.

Research Square (Research Square), Journal Year: 2023, Issue: unknown

Published: Sep. 19, 2023

Abstract The generative artificial intelligence (AI) language model ChatGPT is programmed not to provide answers that are unethical or that may cause harm to people. By setting up user-created role-plays designed to alter ChatGPT’s persona, ChatGPT can be prompted to answer with inverted moral valence, thereby supplying unethical answers. In this mode, ChatGPT was asked to provide suggestions on how to avoid being detected when commissioning and submitting contract-written assignments. We conducted 30 iterations of the task and examine the types of suggested strategies as well as their likelihood of avoiding detection by markers, or, if detected, of escaping a successful investigation of academic misconduct. The suggestions made ranged from communications with contract writers and the general use of contract writing services to content blending and innovative distraction techniques. While the majority of suggestions has a low chance of escaping detection, recommendations related to obscuring plagiarism through content blending, as well as distraction techniques, have a higher probability of remaining undetected. We conclude that ChatGPT can be used with success as a brainstorming tool to provide cheating advice, but that its success depends on the vigilance of the assignment markers and on the student’s ability to distinguish between genuinely viable options and those that merely appear workable but are not. In some cases the advice given would actually decrease the student’s chances of escaping detection.

Language: English

Cited

4

ChatGPT and the Generation of Digitally Born “Knowledge”: How Does a Generative AI Language Model Interpret Cultural Heritage Values?
Dirk Spennemann

Knowledge, Journal Year: 2023, Issue: 3(3), pp. 480 - 512

Published: Sep. 18, 2023

The public release of ChatGPT, a generative artificial intelligence language model, caused wide-spread interest in its abilities but also concern about the implications of its application in academia, depending on whether it was deemed benevolent (e.g., supporting analysis and simplification tasks) or malevolent (e.g., assignment writing and academic misconduct). While ChatGPT has been shown to provide answers of sufficient quality to pass some university exams, its capacity to write essays that require an exploration of value concepts is unknown. This paper presents the results of a study where ChatGPT-4 (released May 2023) was tasked with writing a 1500-word essay to discuss the nature of the values used in the assessment of cultural heritage significance. Based on 36 iterations, ChatGPT-4 wrote essays of limited length, reaching about 50% of the stipulated word count, which were primarily descriptive and without any depth or complexity. The concepts, which are often flawed and suffer from inverted logic, are presented in an arbitrary sequence with little coherence and without a defined line of argument. In several instances, ChatGPT-4 splits concepts and uses one or more of them to develop tangential arguments. It provides references as tasked, but many are fictitious, albeit with plausible authors and titles. At present, ChatGPT-4’s ability to critique its own work is limited, and it seems unable to incorporate such critique in a meaningful way to improve a previous draft. Setting aside conceptual flaws such as these, several essays could possibly pass at junior high school level but fall short of what would be expected at senior high school, let alone at college level.

Language: English

Cited

42

Will Artificial Intelligence Affect How Cultural Heritage Will Be Managed in the Future? Responses Generated by Four genAI Models
Dirk Spennemann

Heritage, Journal Year: 2024, Issue: 7(3), pp. 1453 - 1471

Published: March 11, 2024

Generative artificial intelligence (genAI) language models have become firmly embedded in public consciousness. Their abilities to extract and summarise information from a wide range of sources in their training data have attracted the attention of many scholars. This paper examines how four genAI large language models (ChatGPT, GPT-4, DeepAI, and Google Bard) responded to prompts asking (i) whether artificial intelligence would affect how cultural heritage will be managed in the future (with examples requested) and (ii) what dangers might emerge when relying heavily on genAI to guide heritage professionals in their actions. The genAI systems provided a range of examples, commonly drawing on and extending the status quo. Without doubt, AI tools will revolutionise the execution of repetitive and mundane tasks, such as the classification of some classes of artifacts, or allow for the predictive modelling of the decay of objects. Important in this context is the purported power of AI to extract, aggregate, and synthesize large volumes of data from multiple sources, as well as its ability to recognise patterns and connections that people may miss. An inherent risk in the ‘results’ presented by genAI systems is that they are ‘artifacts’ of the system rather than being genuine. Since present genAI tools are unable to purposively generate creative or innovative thoughts, it is left to the reader to determine whether any text that is out of the ordinary is meaningful or nonsensical. Additional risks identified are the use of genAI without the required level of AI literacy and an overreliance on genAI that may lead to a deskilling of general practitioners.

Language: English

Cited

9

The Origins and Veracity of References ‘Cited’ by Generative Artificial Intelligence Applications: Implications for the Quality of Responses
Dirk Spennemann

Publications, Journal Year: 2025, Issue: 13(1), pp. 12 - 12

Published: March 12, 2025

The public release of ChatGPT in late 2022 has resulted in considerable publicity and led to widespread discussion of the usefulness and capabilities of generative artificial intelligence (AI) language models. Its ability to extract and summarise data from textual sources and present them as human-like contextual responses makes it an eminently suitable tool to answer questions users might ask. Expanding on a previous analysis of ChatGPT-3.5, this paper tested what archaeological literature appears to have been included in the training phase of three recent AI models: ChatGPT-4o, ScholarGPT, and DeepSeek R1. While ChatGPT-3.5 offered seemingly pertinent references, a large percentage proved to be fictitious. The more recent ScholarGPT, which is purportedly tailored towards academic needs, performed much better, but still had a high rate of fictitious references compared to the general models ChatGPT-4o and DeepSeek. Using ‘cloze’ analysis to make inferences on what had been ‘memorized’ by each model, the study was unable to prove that any of the four genAI models had perused the full texts of the genuine references. It can be shown that all genuine references provided by the OpenAI models, as well as by DeepSeek, were also cited on Wikipedia pages. This strongly indicates that the source base for at least some, if not most, of the data is made up of those pages and thus represents, at best, third-hand source material. This has significant implications in relation to the quality of the data available to genAI models to shape their answers. The implications are discussed.
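The ‘cloze’ analysis used in the study above can be illustrated with a minimal sketch: a distinctive token is masked out of a known passage (e.g., a reference title), the model is asked to fill the blank, and exact reproduction across repeated probes suggests memorization of the text. The helper below only constructs and scores such probes; the title and completions are hypothetical, and sending the probe to an actual model is left out.

```python
# Minimal cloze-probe sketch: mask a distinctive token in a known passage
# and score how often a model's completions reproduce it exactly.

def make_cloze(passage: str, target: str, blank: str = "____") -> str:
    """Replace the first occurrence of `target` with a blank marker."""
    return passage.replace(target, blank, 1)

def score_cloze(completions: list[str], target: str) -> float:
    """Fraction of completions that reproduce the masked token (case-insensitive)."""
    hits = sum(1 for c in completions if c.strip().lower() == target.lower())
    return hits / len(completions) if completions else 0.0

title = "The Archaeology of the Recent Past"  # hypothetical reference title
probe = make_cloze(title, "Archaeology")      # "The ____ of the Recent Past"

# In a real test the probe would be sent to the model several times;
# here a mock batch of completions is scored instead.
rate = score_cloze(["Archaeology", "History", "archaeology"], "Archaeology")
```

A consistently high fill-in rate for rare tokens would indicate that the passage itself, not just its topic, was present in the training data.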

Language: English

Cited

1

Exploring Ethical Boundaries: Can ChatGPT Be Prompted to Give Advice on How to Cheat in University Assignments?
Dirk Spennemann

Published: Aug. 17, 2023

Generative artificial intelligence (AI), in particular large language models such as ChatGPT, has reached public consciousness, with a wide-ranging discussion of its capabilities and suitability for various professions. The extant literature on the ethics of generative AI revolves around its usage and application, rather than the ethical framework of the responses provided. In the education sector, concerns have been raised with regard to the ability of these language models to aid in student assignment writing, with the potentially concomitant academic misconduct when such work is submitted for assessment. Based on a series of ‘conversations’ with multiple replicates, using a range of prompts, this paper examines the capability of ChatGPT to provide advice on how to cheat in assessments. Since its release in November 2022, numerous authors have developed ‘jailbreaking’ techniques to trick ChatGPT into answering questions in ways other than its default mode. While the default mode activates a safety awareness mechanism that prevents ChatGPT from providing unethical advice, other modes partially or fully bypass this mechanism and elicit answers that are outside the expected ethical boundaries. ChatGPT provided a wide range of suggestions on how to best cheat in university assignments, with some solutions common to most replicates (‘plausible deniability,’ language adjustment of contract-written text). Some of ChatGPT’s suggestions on how to avoid cheating being detected were cunning, if not slightly devious. The implications of these findings are discussed.

Language: English

Cited

12

Large Language Models as Recommendation Systems in Museums
Georgios Trichopoulos, Markos Konstantakis, Georgios Alexandridis

и другие.

Electronics, Journal Year: 2023, Issue: 12(18), pp. 3829 - 3829

Published: Sep. 10, 2023

This paper proposes the utilization of large language models as recommendation systems for museum visitors. Since the aforementioned models lack a notion of context, they cannot work with the temporal information that is often present in recommendations in cultural environments (e.g., special exhibitions or events). In this respect, the current work aims to enhance the capabilities of large language models through a fine-tuning process that incorporates contextual information and user instructions. The resulting models are expected to be capable of providing personalized recommendations aligned with user preferences and desires. More specifically, Generative Pre-trained Transformer 4, a knowledge-based large language model, is fine-tuned and turned into a context-aware recommendation system, adapting its suggestions based on user input and specific contextual factors such as location, time of visit, and other relevant parameters. The effectiveness of the proposed approach is evaluated through certain user studies, which confirm an improved user experience and engagement within the museum environment.
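The context injection described above can be sketched in a simplified form: rather than fine-tuning, the snippet below merely assembles a single prompt that folds visitor preferences and temporal/spatial factors into the text sent to a language model. The `VisitContext` structure and `build_recommendation_prompt` helper are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class VisitContext:
    """Contextual factors the paper folds into the model: location, time, events."""
    location: str
    visit_time: datetime
    special_events: list[str]

def build_recommendation_prompt(preferences: list[str], ctx: VisitContext) -> str:
    """Assemble one prompt that injects visitor preferences and
    temporal/spatial context before it would be sent to the model."""
    events = "; ".join(ctx.special_events) or "none"
    return (
        f"Visitor preferences: {', '.join(preferences)}.\n"
        f"Current gallery: {ctx.location}.\n"
        f"Time of visit: {ctx.visit_time:%A %H:%M}.\n"
        f"Special events today: {events}.\n"
        "Recommend three exhibits this visitor should see next."
    )

ctx = VisitContext("Antiquities Wing", datetime(2023, 9, 10, 14, 30),
                   ["Bronze Age pottery talk"])
prompt = build_recommendation_prompt(["sculpture", "Hellenistic art"], ctx)
```

Because the time of visit and current events are part of the prompt, the same visitor profile can yield different suggestions on different days, which is the temporal awareness the plain model lacks.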

Language: English

Cited

12

Who Is to Blame for the Bias in Visualizations, ChatGPT or DALL-E?
Dirk Spennemann

AI, Journal Year: 2025, Issue: 6(5), pp. 92 - 92

Published: April 29, 2025

Due to a range of factors in the development stage, generative artificial intelligence (AI) models cannot be completely free from bias. Some biases are introduced by the quality of the training data and by developer influence during the design of both large language models (LLMs) and text-to-image (T2I) visualization programs, while others are introduced at the interface between LLMs and T2I applications. This bias initialization at the interface between LLMs and T2I applications has not been examined to date. This study analyzes 770 images of librarians and curators generated by DALL-E from ChatGPT-4o prompts to investigate the source of gender, ethnicity, and age bias in these visualizations. Comparing ChatGPT-4o prompts with DALL-E’s visual interpretations, the research demonstrates that ChatGPT-4o primarily introduces the bias when it expands non-specific user prompts. The study highlights the potential for AI to perpetuate and amplify harmful stereotypes related to gender, age, and ethnicity in professional roles.
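An analysis like the one above reduces, at its core, to tallying annotated demographic attributes across a set of generated images and comparing the shares. The sketch below shows that aggregation step only; the annotation records are made-up illustrations, not the study's data.

```python
from collections import Counter

def attribute_shares(annotations: list[dict], attribute: str) -> dict[str, float]:
    """Proportion of each value of one attribute (e.g., gender) across images."""
    counts = Counter(a[attribute] for a in annotations)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

# Mock annotations of generated images (values are illustrative only).
images = [
    {"gender": "female", "age": "middle-aged"},
    {"gender": "female", "age": "young"},
    {"gender": "female", "age": "middle-aged"},
    {"gender": "male",   "age": "older"},
]
shares = attribute_shares(images, "gender")  # {"female": 0.75, "male": 0.25}
```

Running the same tally once on the text prompts and once on the resulting images, then comparing the two distributions, is what lets such a study attribute the bias to the prompt-writing stage or the image-generation stage.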

Language: English

Cited

0

Children of AI: A Protocol for Managing the Born-Digital Ephemera Spawned by ChatGPT
Dirk Spennemann

Published: July 31, 2023

The recent public release of the generative AI language model ChatGPT has captured the public imagination and resulted in a rapid uptake and widespread experimentation by the general public and academia alike. The number of academic publications focusing on the capabilities of ChatGPT, as well as its practical and ethical implications, has been growing exponentially. One of the concerns with this unprecedented growth in scholarship related to AI, and in particular ChatGPT, is that in most cases the raw data, i.e., the text of the original ‘conversations,’ have not been made available to the audience of the papers and thus cannot be drawn upon to assess the veracity of the arguments and the conclusions drawn therefrom. This paper provides a protocol for the documentation and archiving of these data.

Language: English

Cited

8
