ChatGPT as a Digital Assistant for Archaeology: Insights from the Smart Anomaly Detection Assistant Development
Gabriele Ciccone

Heritage, Journal year: 2024, Issue 7(10), pp. 5428-5445

Published: Sep. 30, 2024

The introduction of generative AI has the potential to radically transform various fields of research, including archaeology. This study explores the use of AI, specifically ChatGPT, in developing a computer application for analyzing aerial and satellite images to detect archaeological anomalies. The main focus was not on the application itself but on evaluating ChatGPT’s effectiveness as an IT assistant for humanistic researchers. Starting with a simple prompt to analyze a multispectral orthophoto, the application was developed through successive iterations and improved through continuous interactions with ChatGPT. Various technical and methodological challenges were addressed, leading to the creation of a functional application with multiple features, analysis methods, and tools. The process demonstrated how the use of large language models (LLMs) can break down barriers between humanities and science disciplines, enabling researchers without programming skills to develop complex applications in a short time.

Language: English

Mapping the Ethics of Generative AI: A Comprehensive Scoping Review
Thilo Hagendorff

Minds and Machines, Journal year: 2024, Issue 34(4)

Published: Sep. 17, 2024

Language: English

Cited by

27

Security and Privacy Challenges of Large Language Models: A Survey
Badhan Chandra Das, M. Hadi Amini, Yanzhao Wu et al.

ACM Computing Surveys, Journal year: 2025, Issue unknown

Published: Jan. 13, 2025

Large language models (LLMs) have demonstrated extraordinary capabilities and contributed to multiple fields, such as generating and summarizing text, translation, and question-answering. Nowadays, LLMs have become very popular tools in natural language processing (NLP) tasks, with the capability to analyze complicated linguistic patterns and provide relevant responses depending on the context. While offering significant advantages, these models are also vulnerable to security and privacy attacks, such as jailbreaking, data poisoning, and personally identifiable information (PII) leakage attacks. This survey provides a thorough review of the security and privacy challenges of LLMs, along with application-based risks in various domains, such as transportation, education, and healthcare. We assess the extent of LLM vulnerabilities, investigate emerging attacks against LLMs, and discuss potential defense mechanisms. Additionally, the survey outlines existing research gaps and highlights future research directions.

Language: English

Cited by

21

Will Artificial Intelligence Affect How Cultural Heritage Will Be Managed in the Future? Responses Generated by Four genAI Models
Dirk Spennemann

Heritage, Journal year: 2024, Issue 7(3), pp. 1453-1471

Published: Mar. 11, 2024

Generative artificial intelligence (genAI) language models have become firmly embedded in public consciousness. Their abilities to extract and summarise information from a wide range of sources in their training data have attracted the attention of many scholars. This paper examines how four genAI large language models (ChatGPT, GPT4, DeepAI, and Google Bard) responded to prompts asking (i) whether artificial intelligence would affect how cultural heritage will be managed in the future (with examples requested) and (ii) what dangers might emerge when relying heavily on genAI to guide heritage professionals in their actions. The systems provided numerous examples, commonly drawing on and extending the status quo. Without doubt, AI tools will revolutionise the execution of repetitive and mundane tasks, such as the classification of some classes of artifacts, or allow for predictive modelling of the decay of objects. Important in this context is the purported power of genAI to extract, aggregate, and synthesize large volumes of data from multiple sources, as well as its ability to recognise patterns and connections that people may miss. An inherent risk in the ‘results’ presented by genAI systems is that they are ‘artifacts’ of the system rather than being genuine. Since these systems are at present unable to purposively generate creative and innovative thoughts, it is left to the reader to determine whether any text that is out of the ordinary is meaningful or nonsensical. Additional risks identified are the use of genAI without the required level of AI literacy and an overreliance on it, which may lead to the deskilling of general practitioners.

Language: English

Cited by

9

The Origins and Veracity of References ‘Cited’ by Generative Artificial Intelligence Applications: Implications for the Quality of Responses
Dirk Spennemann

Publications, Journal year: 2025, Issue 13(1), pp. 12-12

Published: Mar. 12, 2025

The public release of ChatGPT in late 2022 has resulted in considerable publicity and led to widespread discussion of the usefulness and capabilities of generative artificial intelligence (AI) language models. Its ability to extract and summarise data from textual sources and present them as human-like contextual responses makes it an eminently suitable tool to answer questions users might ask. Expanding on a previous analysis of ChatGPT3.5, this paper tested what archaeological literature appears to have been included in the training phase of three recent AI models: ChatGPT4o, ScholarGPT, and DeepSeek R1. While ChatGPT3.5 offered seemingly pertinent references, a large percentage proved to be fictitious. While the more recent ScholarGPT model, which is purportedly tailored towards academic needs, performed much better, it still produced a high rate of fictitious references compared to the general models ChatGPT4o and DeepSeek. Using ‘cloze’ analysis to make inferences on the sources ‘memorized’ by each model, the paper was unable to prove that any of the four genAI models had perused the full texts of the genuine references. It can be shown that all genuine references provided by the OpenAI models, as well as by DeepSeek, were also cited on Wikipedia pages. This strongly indicates that the source base for at least some, if not most, of those references is the respective Wikipedia pages, which thus represent, at best, third-hand material. This has significant implications in relation to the quality of the data available to generative AI models to shape their answers, which are discussed.

Language: English

Cited by

1

Exploring Ethical Boundaries: Can ChatGPT Be Prompted to Give Advice on How to Cheat in University Assignments?
Dirk Spennemann

Published: Aug. 17, 2023

Generative artificial intelligence (AI), in particular large language models such as ChatGPT, has reached public consciousness, with a wide-ranging discussion of its capabilities and suitability for various professions. The extant literature on the ethics of generative AI revolves around its usage and application, rather than the ethical framework of the responses provided. In the education sector, concerns have been raised with regard to the ability of these models to aid in student assignment writing, with potentially concomitant academic misconduct when such work is submitted for assessment. Based on a series of ‘conversations’ with multiple replicates, using a range of prompts, this paper examines the capability of ChatGPT to provide advice on how to cheat in assessments. Since ChatGPT’s release in November 2022, numerous authors have developed ‘jailbreaking’ techniques to trick ChatGPT into answering questions in ways other than its default mode. While the default mode activates a safety awareness mechanism that prevents ChatGPT from providing unethical advice, other modes partially or fully bypass this mechanism and elicit answers that are outside the expected ethical boundaries. ChatGPT provided a wide range of suggestions on how to best cheat in university assignments, with some solutions common to most replicates (‘plausible deniability,’ the language adjustment of contract-written text). Some of ChatGPT’s suggestions on how to avoid cheating being detected were cunning, if not slightly devious. The implications of these findings are discussed.

Language: English

Cited by

12

Auditing GPT's Content Moderation Guardrails: Can ChatGPT Write Your Favorite TV Show?
Yaaseen Mahomed, Charlie M. Crawford, Sanjana Gautam et al.

2022 ACM Conference on Fairness, Accountability, and Transparency, Journal year: 2024, Issue 2020, pp. 660-686

Published: Jun. 3, 2024

Large language models (LLMs) are increasingly appearing in consumer-facing products. To prevent problematic use, the organizations behind these systems have put content moderation guardrails in place that prevent them from generating content they consider harmful. However, most of these enforcement standards and processes are opaque. Although they play a major role in the user experience of these tools, automated content moderation tools have received relatively less attention than other aspects of the models. This study undertakes an algorithm audit of OpenAI's ChatGPT with the goal of better understanding its content moderation guardrails and their potential biases. To evaluate performance on a broad cultural range of content, we generate a dataset from 100 popular United States television shows, with one to three synopses for each episode of the first season of each show (3,309 total synopses). We probe GPT's moderation endpoint (ME) to identify violating content, both in the synopses themselves and in GPT's own outputs when asked to write a script based on a synopsis, also comparing the ME results against 81 real scripts for the same TV shows (269,578 total outputs). Our findings show that a large number of GPT-generated scripts are flagged as violations (about 18% of the GPT-generated scripts and 69% of the real ones). Using show metadata, we find that maturity ratings, as well as certain genres (Animation, Crime, Fantasy, and others), are statistically significantly related to a script's likelihood of being flagged. We conclude by discussing the implications for LLM self-censorship and directions for future research and audit procedures.

Language: English

Cited by

5

Academic integrity and the use of ChatGPT by EFL pre-service teachers
Mohamad Ahmad Saleem Khasawneh

Journal of Infrastructure Policy and Development, Journal year: 2024, Issue 8(7), pp. 4783-4783

Published: Jul. 29, 2024

Academic integrity has been at the centre of discussion in the adoption of ChatGPT by academics in their research. This study explored how academic integrity mitigates the desire to use ChatGPT for academic tasks among EFL pre-service teachers, in consideration of the time-saving factor, perceived peer influence, self-effectiveness, and self-esteem. The study utilized web-based questionnaires to elicit data from 300 EFL pre-service teachers across educational fields drawn from different schools around the world. Analysis was conducted using relevant statistical measures to test the four projected hypotheses. The findings provide evidence in support of Hypothesis 1, with a statistically significant path coefficient (β) of 0.442, a t-value of 3.728, and a p-value of 0.000. The acceptance of this hypothesis implies that when academic integrity improves, the impact of the time-saving aspect of ChatGPT decreases. This suggests that those who have a firm dedication to honesty are less influenced by the tempting appeal of ChatGPT’s time-saving features, highlighting how ethical factors influence decision-making. The findings also provide support for Hypothesis 2, indicating a substantial inverse relationship, with a path coefficient of 0.369, a t-value of 5.629, and a p-value of 0.001. These findings indicate that a stronger adherence to academic integrity is linked to a diminished effect of colleagues on the choice to use ChatGPT for academic tasks. The results suggest that academic integrity serves as a protective barrier against exogenous pressures or influences when it comes to embracing cutting-edge technology. In general, these findings revealed that there is a negative association between academically related factors (e.g., a sense of time pressure, language self-confidence, and competence), as well as a general attitude toward ChatGPT, and the commitment towards academic integrity.

Language: English

Cited by

2

ChatGPT giving advice on how to cheat in university assignments—how workable are its suggestions?
Dirk Spennemann, Jessica Biles, Lachlan Brown et al.

Research Square (Research Square), Journal year: 2023, Issue unknown

Published: Sep. 19, 2023

The generative artificial intelligence (AI) language model ChatGPT is programmed not to provide answers that are unethical or that may cause harm to people. By setting up user-created role-plays designed to alter ChatGPT’s persona, ChatGPT can be prompted to answer with inverted moral valence, supplying unethical answers. In this mode, ChatGPT was asked for suggestions on how to avoid being detected when commissioning and submitting contract-written assignments. We conducted 30 iterations of the task and examine the types of suggested strategies and their likelihood of avoiding detection by markers, or, if detected, of escaping a successful investigation of academic misconduct. The suggestions made by ChatGPT ranged from communications with the contract writers and the general use of contract writing services to content blending and innovative distraction techniques. While the majority of the suggestions has a low chance of escaping detection, recommendations related to obscuring plagiarism, as well as distraction techniques, have a higher probability of remaining undetected. We conclude that ChatGPT can be used with success as a brainstorming tool for cheating advice, but that its usefulness depends on the vigilance of the assignment markers and on the student’s ability to distinguish between genuinely viable options and those that appear workable but are not. In some cases, the advice given would actually decrease the likelihood of remaining undetected.

Language: English

Cited by

4

Aussagepsychologische Begutachtung in Zeiten von ChatGPT
Paul K. Jäckel, Gil Keller, Melina Popp et al.

Praxis der Rechtspsychologie, Journal year: 2024, Issue 34(1), pp. 89-102

Published: Jun. 1, 2024

Summary: This article examines the possibilities and limits of preparing a testifying person for a credibility assessment (aussagepsychologische Begutachtung) using ChatGPT, a popular chatbot based on artificial intelligence. To this end, general and specific questions were posed to ChatGPT-3.5 to determine its level of knowledge about credibility assessments, credible presentation in general, recovered memories, transferred knowledge, and successful lying. The results show that ChatGPT does possess basic knowledge; for example, it emphasises the importance of consistent and detailed statements. However, it does not have a deeper expert understanding. Particularly with regard to the credible presentation of recovered memories, ChatGPT tends towards simplified or even incorrect answers. It can be concluded that ChatGPT can be used for superficial information gathering in preparation for an assessment, but it cannot provide deep or even critical insights into assessment methodology. Implications and limitations of the article are discussed in conclusion.

Cited by

1

Children of AI: A Protocol for Managing the Born-Digital Ephemera Spawned by Generative AI Language Models
Dirk Spennemann

Publications, Journal year: 2023, Issue 11(3), pp. 45-45

Published: Sep. 21, 2023

The recent public release of the generative AI language model ChatGPT has captured the public imagination and resulted in a rapid uptake and widespread experimentation by the general public and academia alike. The number of academic publications focusing on the capabilities of ChatGPT, as well as its practical and ethical implications, has been growing exponentially. One of the concerns with this unprecedented growth in scholarship related to AI, and ChatGPT in particular, is that, in most cases, the raw data, which are the texts of the original ‘conversations,’ have not been made available to the audience of the papers and thus cannot be drawn on to assess the veracity of the arguments and the conclusions drawn therefrom. This paper provides a protocol for the documentation and archiving of these data.

Language: English

Cited by

2