Two paths of balancing technology and ethics: A comparative study on AI governance in China and Germany
Viktor Tuzov, Fen Lin

Telecommunications Policy, Journal Year: 2024, Volume and Issue: unknown, P. 102850 - 102850

Published: Sept. 1, 2024

Language: English

Plagiarism in the age of massive Generative Pre-trained Transformers (GPT-3)
Nassim Dehouche

Ethics in Science and Environmental Politics, Journal Year: 2021, Volume and Issue: 21, P. 17 - 23

Published: Jan. 18, 2021

As if 2020 was not a peculiar enough year, its fifth month saw the relatively quiet publication of a preprint describing the most powerful natural language processing (NLP) system to date, GPT-3 (Generative Pre-trained Transformer-3), created by the Silicon Valley research firm OpenAI. Though the software implementation of GPT-3 is still in its initial beta release phase and its full capabilities are unknown as of the time of this writing, it has been shown that this artificial intelligence can comprehend prompts in natural language, on virtually any topic, and generate relevant, original text content that is indistinguishable from human writing. Moreover, access to these capabilities, to a limited yet worrisome extent, is available to the general public. This paper presents examples of content generated by the author using GPT-3. These examples illustrate some of its capabilities in comprehending prompts and generating convincing responses. I use them to raise specific, fundamental questions pertaining to the intellectual property of such content and to GPT-3's potential to facilitate plagiarism. The goal is to instigate a sense of urgency, as well as of present tardiness on the part of the academic community, in addressing these questions.

Language: English

Citations

244

Large language models (LLMs): survey, technical frameworks, and future challenges
Pranjal Kumar

Artificial Intelligence Review, Journal Year: 2024, Volume and Issue: 57(10)

Published: Aug. 18, 2024

Artificial intelligence (AI) has significantly impacted various fields. Large language models (LLMs) such as GPT-4, BARD, PaLM, Megatron-Turing NLG, and Jurassic-1 Jumbo have contributed to our understanding and application of AI in these domains, along with natural language processing (NLP) techniques. This work provides a comprehensive overview of LLMs in the context of language modeling, word embeddings, and deep learning. It examines diverse fields including text generation, vision-language models, personalized learning, biomedicine, and code generation. The paper offers a detailed introduction and background on LLMs, facilitating a clear understanding of their fundamental ideas and concepts. Key language modeling architectures are also discussed, alongside a survey of recent works employing LLM methods for downstream tasks across different domains. Additionally, it assesses the limitations of current approaches and highlights the need for new methodologies and potential directions for significant advancements in this field.

Language: English

Citations

44

Formal verification of neural network controlled autonomous systems
Xiaowu Sun, Haitham Khedr, Yasser Shoukry

et al.

Published: April 8, 2019

In this paper, we consider the problem of formally verifying the safety of an autonomous robot equipped with a Neural Network (NN) controller that processes LiDAR images to produce control actions. Given a workspace characterized by a set of polytopic obstacles, our objective is to compute the set of safe initial states such that a trajectory starting from these states is guaranteed to avoid the obstacles. Our approach is to construct a finite state abstraction of the system and use standard reachability analysis over the abstract states. To mathematically model the imaging function, which maps the robot position to the LiDAR image, we introduce the notion of imaging-adapted partitions of the workspace in which the imaging function is guaranteed to be affine. Given this partitioning, the discrete-time linear dynamics of the robot, and a pre-trained NN controller with Rectified Linear Unit (ReLU) non-linearity, we utilize a Satisfiability Modulo Convex (SMC) encoding to enumerate all possible assignments of the different ReLUs. To accelerate this process, we develop a pre-processing algorithm that can rapidly prune the space of feasible ReLU assignments. Finally, we demonstrate the efficiency of the proposed algorithms using numerical simulations with increasing complexity of the neural network controller.
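To make the ReLU-enumeration idea above concrete, here is a minimal sketch, assuming the z3 SMT solver's Python bindings are installed, that encodes a single ReLU neuron with a Boolean phase variable so a solver can check which activation assignments are feasible. The weight, bias, bounds, and variable names are hypothetical illustrations of the constraint structure, not the authors' SMC implementation.

```python
# Minimal sketch (not the authors' code): encode y = ReLU(w*x + b) for one
# neuron with a Boolean "phase" indicator, in the spirit of SMT/SMC encodings
# that enumerate ReLU activation patterns.
from z3 import Real, Bool, Solver, Implies, And, Not, sat

x, y = Real('x'), Real('y')          # neuron input and output
phase = Bool('phase')                 # True: ReLU active, False: ReLU inactive
w, b = 2.0, -1.0                      # hypothetical weight and bias

s = Solver()
pre = w * x + b                       # pre-activation value
s.add(Implies(phase, And(pre >= 0, y == pre)))       # active branch
s.add(Implies(Not(phase), And(pre <= 0, y == 0)))    # inactive branch
s.add(x >= -1, x <= 1)                # hypothetical input bounds

# Enumerate feasible ReLU assignments by checking each phase separately.
for p in (True, False):
    s.push()
    s.add(phase == p)
    print(f"phase={p}: {'feasible' if s.check() == sat else 'infeasible'}")
    s.pop()
```

A full verifier would repeat this encoding for every ReLU in the network and combine it with the robot dynamics and obstacle constraints; pruning infeasible phase combinations early is what keeps the enumeration tractable.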

Language: English

Citations

122

A survey on artificial intelligence assurance
Feras A. Batarseh, Laura E. Beane Freeman, Chih‐Hao Huang

et al.

Journal Of Big Data, Journal Year: 2021, Volume and Issue: 8(1)

Published: April 26, 2021

Artificial Intelligence (AI) algorithms are increasingly providing decision making and operational support across multiple domains. AI includes a wide (and growing) library of algorithms that could be applied to different problems. One important notion for the adoption of AI into operational processes is the concept of assurance. The literature on assurance, unfortunately, conceals its outcomes within a tangled landscape of conflicting approaches, driven by contradicting motivations, assumptions, and intuitions. Accordingly, albeit a rising and novel area, this manuscript provides a systematic review of research works relevant to AI assurance between the years 1985 and 2021, and aims to provide a structured alternative to that landscape. A new AI assurance definition is adopted and presented, and assurance methods are contrasted and tabulated. Additionally, a ten-metric scoring system is developed and introduced to evaluate and compare existing methods. Lastly, the manuscript provides foundational insights, discussions, future directions, a roadmap, and applicable recommendations for the development and deployment of AI assurance.
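The ten-metric scoring system is not reproduced here, but a minimal sketch of the general idea, a weighted rubric that aggregates per-metric scores so assurance methods can be ranked, is shown below. The metric names, weights, and scores are placeholders invented for illustration, not the metrics defined in the paper.

```python
# Minimal sketch (illustrative only): a generic weighted-rubric scorer in the
# spirit of a ten-metric comparison of assurance methods. Metric names, weights,
# and scores below are hypothetical, not those defined in the paper.
from dataclasses import dataclass

METRICS = ["explainability", "robustness", "fairness", "monitoring",
           "validation", "documentation", "security", "privacy",
           "maintainability", "domain_fit"]   # placeholder metric names

@dataclass
class AssuranceMethod:
    name: str
    scores: dict  # metric -> score in [0, 5]

def total_score(method: AssuranceMethod, weights: dict | None = None) -> float:
    """Aggregate per-metric scores into one comparable number."""
    weights = weights or {m: 1.0 for m in METRICS}
    return sum(weights[m] * method.scores.get(m, 0) for m in METRICS)

methods = [
    AssuranceMethod("method_A", {m: 3 for m in METRICS}),
    AssuranceMethod("method_B", {m: (4 if m == "robustness" else 2) for m in METRICS}),
]
for m in sorted(methods, key=total_score, reverse=True):
    print(m.name, total_score(m))
```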

Language: English

Citations

84

WAx: An integrated conceptual framework for the analysis of cyber-socio-technical systems
Riccardo Patriarca, Andrea Falegnami, Francesco Costantino

et al.

Safety Science, Journal Year: 2021, Volume and Issue: 136, P. 105142 - 105142

Published: Jan. 13, 2021

Modern work domains are constituted by an intertwined set of social and technical actors with different, often conflicting, functional purposes. These agents act jointly to ensure the system's functioning under both expected and unexpected working conditions. Considering the increasing digitalization and automation of processes, socio-technical systems are progressively including interconnected cyber artefacts, thus becoming cyber-socio-technical systems (CSTSs). Adopting a natural science perspective, this paper aims to explore knowledge creation and conversion within CSTSs, as rooted in an in-depth analysis of work practices and contexts. The paper proposes a conceptual framework which unveils the relationships between different work representations, i.e. Work-As-Imagined, Work-As-Done, Work-As-Disclosed, and Work-As-Observed, intended as entities generated by agents such as sharp-end operators and blunt-end analysts. The recursive fractal nature of the proposed WAx (Work-As-x) framework ensures its adaptability to different granularity levels of analysis, fostering the understanding and modeling of work practices while abandoning reductionist and over-simplistic approaches.

Language: English

Citations

73

Artificial Intelligence for Safety-Critical Systems in Industrial and Transportation Domains: A Survey
Jon Pérez, Jaume Abella, Markus Borg

et al.

ACM Computing Surveys, Journal Year: 2023, Volume and Issue: 56(7), P. 1 - 40

Published: Oct. 11, 2023

Artificial Intelligence (AI) can enable the development of next-generation autonomous safety-critical systems in which Machine Learning (ML) algorithms learn optimized and safe solutions. AI can also support and assist human safety engineers in developing such systems. However, reconciling cutting-edge, state-of-the-art AI technology with safety engineering processes and standards is an open challenge that must be addressed before AI can be fully embraced in safety-critical systems. Many works already address this challenge, resulting in a vast and fragmented literature. Focusing on the industrial and transportation domains, this survey structures and analyzes the challenges, techniques, and methods for AI-based safety-critical systems, from traditional functional safety to trustworthy AI. Trustworthiness spans several dimensions, such as engineering, ethics, and legal, and this survey focuses on the safety engineering dimension.

Language: English

Citations

34

Managing the risks of artificial general intelligence: A human factors and ergonomics perspective
Paul M. Salmon, Chris Baber, Catherine M. Burns

et al.

Human Factors and Ergonomics in Manufacturing & Service Industries, Journal Year: 2023, Volume and Issue: 33(5), P. 366 - 378

Published: May 28, 2023

Artificial General Intelligence (AGI) is the next and forthcoming evolution of Artificial Intelligence (AI). Though there could be significant benefits to society, there are also concerns that AGI could pose an existential threat. The critical role of Human Factors and Ergonomics (HFE) in the design of safe, ethical, and usable AGI has been emphasized; however, there is little evidence to suggest that HFE is currently influencing AGI development programs. Further, given the broad spectrum of potential application areas, it is not clear what activities are required for HFE to fulfill this role. This article presents the perspectives of 10 researchers working in AI safety and HFE on the potential risks associated with AGI, the HFE concepts that require consideration during its design, and the role of HFE in what could be humanity's final invention. A diverse set of perspectives is presented, with agreement that AGI potentially poses an existential threat and that many HFE concepts should be considered during its design and operation. A range of activities is proposed, including collaboration with AGI developers, dissemination of HFE work to other relevant disciplines, embedment of HFE throughout the AGI lifecycle, and the use of systems HFE methods to help identify and manage risks.

Language: English

Citations

28

“Good Robot!”: Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer
Andrew Hundt, Benjamin D. Killeen, Nicholas D. E. Greene

et al.

IEEE Robotics and Automation Letters, Journal Year: 2020, Volume and Issue: 5(4), P. 6724 - 6731

Published: Aug. 11, 2020

Current Reinforcement Learning (RL) algorithms struggle with long-horizon tasks where time can be wasted exploring dead ends and task progress may be easily reversed. We develop the SPOT framework, which explores within action safety zones, learns about unsafe regions without exploring them, and prioritizes experiences that reverse earlier progress to learn with remarkable efficiency. The framework successfully completes simulated trials of a variety of tasks, improving the baseline trial success rate from 13% to 100% when stacking 4 cubes, to 99% when creating rows, and from 84% to 95% when clearing toys arranged in adversarial patterns. Efficiency with respect to actions per trial typically improves by 30% or more, while training takes just 1-20k actions, depending on the task. Furthermore, we demonstrate direct sim to real transfer. We are able to create stacks with 61% efficiency and rows with 59% efficiency by directly loading the simulation-trained model onto the robot with no additional real-world fine-tuning. To our knowledge, this is the first instance of reinforcement learning with successful sim to real transfer applied to long-term multi-step tasks such as block stacking and row making with consideration of progress reversal. Code is available at https://github.com/jhu-lcsr/good_robot .
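As a rough illustration of the safety-zone idea, the sketch below masks out actions whose predicted next state would leave a safe region before greedy selection. The dynamics, bounds, and function names are hypothetical and are not taken from the SPOT code linked above.

```python
# Minimal sketch (hypothetical, not the SPOT code): restrict exploration to a
# safety zone by masking actions predicted to leave the safe region, then
# select greedily among the remaining ones.
import numpy as np

def in_safety_zone(state: np.ndarray) -> bool:
    """Hypothetical safe region: stay inside the unit box."""
    return bool(np.all(np.abs(state) <= 1.0))

def select_action(state: np.ndarray, actions: np.ndarray, q_values: np.ndarray) -> int:
    """Pick the highest-value action whose predicted next state stays safe."""
    next_states = state + actions            # toy additive dynamics
    safe_mask = np.array([in_safety_zone(s) for s in next_states])
    if not safe_mask.any():                  # no safe action: fall back to best overall
        return int(np.argmax(q_values))
    masked_q = np.where(safe_mask, q_values, -np.inf)
    return int(np.argmax(masked_q))

state = np.array([0.9, 0.0])
actions = np.array([[0.2, 0.0], [-0.2, 0.0], [0.0, 0.2]])
q_values = np.array([1.0, 0.5, 0.8])
print(select_action(state, actions, q_values))   # skips the unsafe +x action
```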

Language: English

Citations

57

Navigating artificial general intelligence development: societal, technological, ethical, and brain-inspired pathways
Raghu Raman, Robin M. Kowalski, Krishnashree Achuthan

et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: March 11, 2025

This study examines the imperative to align artificial general intelligence (AGI) development with societal, technological, ethical, and brain-inspired pathways to ensure its responsible integration into human systems. Using the PRISMA framework and BERTopic modeling, it identifies five key pathways shaping AGI's trajectory: (1) societal integration, addressing broader impacts, public adoption, and policy considerations; (2) technological advancement, exploring real-world applications, implementation challenges, and scalability; (3) explainability, enhancing transparency, trust, and interpretability in AGI decision-making; (4) cognitive and ethical considerations, linking evolving cognitive architectures to ethical frameworks, accountability, and consequences; and (5) brain-inspired systems, leveraging neural models to improve learning efficiency, adaptability, and reasoning capabilities. The study makes a unique contribution by systematically uncovering underexplored themes and proposing a conceptual framework that connects AI advancements to the practical, multifaceted technical and ethical challenges of AGI development. The findings call for interdisciplinary collaboration to bridge critical gaps in governance and alignment, and for strategies toward equitable access, workforce adaptation, and sustainable integration. Additionally, the study highlights emerging research frontiers, such as AGI-consciousness interfaces and collective intelligence, offering new ways to integrate human-centered applications. By synthesizing insights across disciplines, it provides a comprehensive roadmap for balancing innovation with responsibilities, advancing both progress and well-being.
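For readers unfamiliar with the topic-modeling step, a minimal sketch of applying BERTopic to a corpus of abstracts is shown below, assuming the bertopic package is installed. The documents and parameters are placeholders, not the study's PRISMA-screened corpus.

```python
# Minimal sketch (assumes `pip install bertopic`): surface latent themes from a
# small corpus of abstracts, broadly mirroring the topic-modeling step of the
# study. The documents below are placeholders, not the PRISMA-screened corpus.
from bertopic import BERTopic

docs = [
    "Governance and policy considerations for AGI adoption in society.",
    "Scalable architectures and implementation challenges for AGI systems.",
    "Transparency and interpretability in AGI decision-making.",
    "Ethical frameworks and accountability for evolving cognitive architectures.",
    "Brain-inspired neural models for efficient learning and reasoning.",
] * 20  # repeat to give the clustering step enough documents

topic_model = BERTopic(min_topic_size=5, verbose=False)
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())   # one row per discovered theme
```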

Language: English

Citations

1

Scalar reward is not enough: a response to Silver, Singh, Precup and Sutton (2021)
Peter Vamplew, Benjamin J. Smith, Johan Källström

et al.

Autonomous Agents and Multi-Agent Systems, Journal Year: 2022, Volume and Issue: 36(2)

Published: July 16, 2022

The recent paper “Reward is Enough” by Silver, Singh, Precup and Sutton posits that the concept of reward maximisation is sufficient to underpin all intelligence, both natural and artificial, and provides a suitable basis for the creation of artificial general intelligence. We contest the underlying assumption of Silver et al. that such reward can be scalar-valued. In this paper we explain why scalar rewards are insufficient to account for some aspects of both biological and computational intelligence, and argue in favour of explicitly multi-objective models of reward maximisation. Furthermore, we contend that even if scalar reward functions can trigger intelligent behaviour in specific cases, this type of reward is insufficient for the development of human-aligned artificial general intelligence due to the unacceptable risks of unsafe or unethical behaviour.
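To make the scalar-versus-vector distinction concrete, the sketch below contrasts a weighted scalarisation of a two-objective reward with a Pareto-dominance comparison that keeps both objectives visible. The objectives, values, and weights are invented for illustration and do not come from the paper.

```python
# Minimal sketch (illustrative): the same outcomes judged by a scalarised reward
# versus a vector-valued (multi-objective) reward compared by Pareto dominance.
import numpy as np

# Two objectives per action: (task performance, safety). Values are invented.
rewards = {
    "aggressive": np.array([0.9, 0.2]),
    "cautious":   np.array([0.6, 0.9]),
}

def scalarise(r: np.ndarray, weights=np.array([0.8, 0.2])) -> float:
    """Collapse the reward vector to a single number (what scalar RL optimises)."""
    return float(weights @ r)

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """a Pareto-dominates b if it is at least as good everywhere and better somewhere."""
    return bool(np.all(a >= b) and np.any(a > b))

for name, r in rewards.items():
    print(name, "scalarised:", round(scalarise(r), 3))

# With these weights the scalarised reward prefers the low-safety action, while
# under Pareto dominance neither action dominates the other, so the trade-off
# between performance and safety stays visible instead of being hidden.
print(dominates(rewards["aggressive"], rewards["cautious"]),
      dominates(rewards["cautious"], rewards["aggressive"]))
```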

Language: English

Citations

24