Exploring the role of large language models in radiation emergency response
Anirudh Chandra, Abinash Chakraborty

Journal of Radiological Protection, Journal Year: 2024, Volume and Issue: 44(1), P. 011510 - 011510

Published: Feb. 7, 2024

Abstract In recent times, the field of artificial intelligence (AI) has been transformed by the introduction of large language models (LLMs). These models, popularized by OpenAI’s GPT-3, have demonstrated the emergent capabilities of AI in comprehending and producing text resembling human language, which has helped them transform several industries. But their role is yet to be explored in the nuclear industry, specifically in managing radiation emergencies. The present work explores LLMs’ contextual awareness, natural language interaction, and their capacity to comprehend diverse queries in a radiation emergency response setting. In this study we identify different user types and their specific LLM use-cases. Their possible interactions with ChatGPT, a popular LLM, are also simulated and preliminary results are presented. Drawing on insights gained from this exercise, we address concerns of reliability and misinformation, and advocate for expert-guided, domain-specific LLMs trained on safety protocols and historical data. This study aims to guide emergency management practitioners and decision-makers in effectively incorporating LLMs into their decision support framework.

Language: English

Generating meaning: active inference and the scope and limits of passive AI
Giovanni Pezzulo, Thomas Parr, Paul Cisek, et al.

Trends in Cognitive Sciences, Journal Year: 2023, Volume and Issue: 28(2), P. 97 - 112

Published: Nov. 15, 2023

Prominent accounts of sentient behavior depict brains as generative models of organismic interaction with the world, evincing intriguing similarities with current advances in artificial intelligence (AI). However, because they contend with the control of purposive, life-sustaining sensorimotor interactions, the generative models of living organisms are inextricably anchored to the body and world. Unlike the passive models learned by generative AI systems, they must capture the sensory consequences of action. This allows embodied agents to intervene upon their worlds in ways that constantly put their best models to the test, thus providing a solid bedrock that is – we argue – essential to the development of genuine understanding. We review the resulting implications and consider future directions for AI.

Language: English

Citations: 35

Interactive repair and the foundations of language
Mark Dingemanse, N. J. Enfield

Trends in Cognitive Sciences, Journal Year: 2023, Volume and Issue: 28(1), P. 30 - 42

Published: Oct. 16, 2023

Language: English

Citations: 22

Transparency in research: An analysis of ChatGPT usage acknowledgment by authors across disciplines and geographies
Raghu Raman

Accountability in Research, Journal Year: 2023, Volume and Issue: unknown, P. 1 - 22

Published: Oct. 25, 2023

This investigation systematically reviews the acknowledgment of generative AI tools, particularly ChatGPT, in scholarly literature. Utilizing 1,226 publications from the Dimensions database, ranging from November 2022 to July 2023, the research scrutinizes temporal trends and distribution across disciplines and regions. U.S.-based authors lead in acknowledgments, with notable contributions from China and India. Predominantly, the Biomedical and Clinical Sciences, as well as Information and Computing Sciences, are engaging with these tools. Publications like "The Lancet Digital Health" and platforms such as "bioRxiv" are recurrent venues, highlighting AI's growing impact on research dissemination. The analysis is confined to the Dimensions database, thus potentially overlooking other sources and grey literature. Additionally, the study abstains from examining the quality of acknowledgments or their ethical considerations. The findings are beneficial to stakeholders, providing a basis for policy discourse on AI use in academia. This work represents an inaugural comprehensive empirical assessment of AI acknowledgment patterns in academic contexts, addressing a previously unexplored aspect of scholarly communication.

Language: English

Citations: 22

Evaluating Privacy Compliance in Commercial Large Language Models - ChatGPT, Claude, and Gemini

Oliver Cartwright, H. Flanders Dunbar, Theo Radcliffe, et al.

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: July 26, 2024

Abstract The integration of artificial intelligence systems into various domains has raised significant privacy concerns, necessitating stringent regulatory measures to protect user data. Evaluating the compliance of commercial large language models (LLMs) such as ChatGPT-4o, Claude Sonnet, and Gemini Flash under the EU AI Act presents a novel approach, providing critical insights into their adherence to regulatory standards. The study utilized hypothetical case studies to assess the privacy practices of these LLMs, focusing on data collection, storage, and sharing mechanisms. Findings revealed that ChatGPT-4o exhibited issues with data minimization and access control, while Claude Sonnet demonstrated robust privacy practices and effective security measures. Gemini Flash, however, showed inconsistencies in data collection and a higher incidence of anonymization failures. The comparative analysis underscored the importance of tailored compliance strategies and continuous monitoring to ensure privacy compliance. These results provide valuable insights for developers and policymakers, emphasizing the necessity of a multifaceted approach to the compliant deployment of LLMs.

Language: English

Citations: 13

Attributions toward artificial agents in a modified Moral Turing Test
Eyal Aharoni, Sharlene Fernandes, Daniel J. Brady, et al.

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: April 30, 2024

Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by Allen et al.'s (Exp Theor Artif Intell 352:24-28, 2004) proposal, by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. Remarkably, they rated the AI's moral reasoning as superior in quality to humans' along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels. Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans' raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality.

Language: English

Citations: 11

Large language models in psychiatry: Opportunities and challenges
Sebastian Volkmer, Andreas Meyer‐Lindenberg, Emanuel Schwarz, et al.

Psychiatry Research, Journal Year: 2024, Volume and Issue: 339, P. 116026 - 116026

Published: June 12, 2024

Language: English

Citations: 11

Exploiting Privacy Vulnerabilities in Open Source LLMs Using Maliciously Crafted Prompts

Géraud Choquet, Aimée Aizier, Gwenaëlle Bernollin, et al.

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: June 18, 2024

Abstract The proliferation of AI technologies has brought to the forefront concerns regarding the privacy and security of user data, particularly with the increasing deployment of powerful language models such as Llama. A novel concept investigated here involves inducing privacy breaches through maliciously crafted prompts, highlighting the potential for these models to inadvertently reveal sensitive information. The study systematically evaluated the privacy vulnerabilities of the Llama model, employing an automated framework to test and analyze its responses to a variety of adversarial inputs. Findings revealed significant flaws, demonstrating the model's susceptibility to adversarial attacks that could compromise user privacy. Comprehensive analysis provided insights into the types of prompts most effective in eliciting private data, demonstrating the necessity of robust regulatory frameworks and advanced security measures. The implications of these findings are profound, calling for immediate action to enhance the privacy protocols of LLMs and protect against breaches. Enhanced regulatory oversight and continuous innovation in privacy-preserving techniques are crucial for ensuring the safe deployment of LLMs in various applications. The insights derived from this research contribute to a deeper understanding of LLM vulnerabilities and the urgent need for improved safeguards to prevent data leakage and unauthorized access.

Language: English

Citations: 11

Foundation models are platform models: Prompting and the political economy of AI
Sarah Burkhardt, Bernhard Rieder

Big Data & Society, Journal Year: 2024, Volume and Issue: 11(2)

Published: April 22, 2024

A recent innovation in the field of machine learning has been the creation of very large pre-trained models, also referred to as ‘foundation models’, that draw on much larger and broader sets of data than typical deep learning systems and can be applied to a wide variety of tasks. Underpinning text-based systems such as OpenAI's ChatGPT and image generators such as Midjourney, these models have received extraordinary amounts of public attention, in part due to their reliance on prompting as the main technique to direct and apply them. This paper thus uses prompting as an entry point into the critical study of foundation models and their implications. The paper proceeds as follows: In the first section, we introduce foundation models in more detail, outline some critiques, and present our general approach. We then discuss prompting as an algorithmic technique, show how it makes foundation models programmable, and explain how it enables different audiences to use these models as (computational) platforms. In the third section, we link the material properties of the technologies under scrutiny to questions of political economy, discussing, in turn, user interactions, reordered cost structures, and centralization and lock-in. We conclude by arguing that foundation models and prompting further strengthen Big Tech's dominance over the field of computing and, through their broad applicability, many other economic sectors, challenging our capacities for critical appraisal and regulatory response.

Language: English

Citations: 9

Enhancing Inference Efficiency in Large Language Models through Rapid Feed-Forward Information Propagation

Damian Gomez, Julian Escobar

Published: June 13, 2024

The increasing complexity and computational demands of language models require innovations to enhance their efficiency and performance. A novel approach to rapid feed-forward information propagation presents significant advancements by optimizing the architecture of the Mistral Large model, leading to substantial improvements in inference speed and memory usage. Comprehensive architectural modifications, including parameter sharing and reduced layer depth, streamlined the model's processes, while the integration of additional feed-forward pathways and mixed-precision training further optimized its efficiency. Detailed experimental results demonstrate the effectiveness of these enhancements, showing marked improvements in latency, throughput, and accuracy across various benchmark datasets. The study also highlights the model's robustness and scalability, ensuring reliable performance across diverse deployment scenarios. The implications of these findings are profound, providing a framework for developing more efficient, scalable, and high-performing language models, with broad applicability to real-world natural language processing tasks.

Language: English

Citations: 9

Enhancing Contextual Understanding of Mistral LLM with External Knowledge Bases

Miyu Sasaki, Natsumi Watanabe, Tsukihito Komanaka, et al.

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: April 5, 2024

Abstract This study explores the enhancement of contextual understanding and factual accuracy in Language Learning Models (LLMs), specifically the Mistral LLM, through the integration of external knowledge bases. We developed a novel methodology for dynamically incorporating real-time information from diverse sources, aiming to address the inherent limitations of LLMs rooted in their training datasets. Our experiments demonstrated significant improvements in accuracy, precision, recall, and F1 score, alongside qualitative enhancements in response relevance and accuracy. The research also tackled the computational challenges of integrating external knowledge, ensuring the model's efficiency and practical applicability. This work not only highlights the potential of external knowledge bases to augment LLM capabilities but also sets the stage for future advancements in creating more intelligent, adaptable, and contextually aware AI systems. The findings contribute to the broader field of NLP by offering insights into overcoming the traditional limitations of LLMs, presenting a step toward developing AI systems with enhanced real-world applicability and accessibility.

Language: English

Citations: 8