Applied Soft Computing, Journal Year: 2024, Issue 170, pp. 112663 - 112663
Published: Dec. 25, 2024
Language: English
Physics of Life Reviews, Journal Year: 2025, Issue 52, pp. 180 - 193
Published: Jan. 5, 2025
The pursuit of artificial consciousness requires conceptual clarity to navigate its theoretical and empirical challenges. This paper introduces a composite, multilevel, multidimensional model as a heuristic framework to guide research in this field. Consciousness is treated as a complex phenomenon with distinct constituents and dimensions that can be operationalized for study and for evaluating their replication. We argue that this model provides a balanced approach by avoiding binary thinking (e.g., conscious vs. non-conscious) and by offering a structured basis for testable hypotheses. To illustrate its utility, we focus on "awareness" as a case study, demonstrating how a specific dimension can be pragmatically analyzed and targeted for potential instantiation. By breaking down the intricacies of consciousness and aligning them with practical goals, the model lays the groundwork for a robust strategy to advance the scientific and technical understanding of consciousness.
Language: English
Cited by: 0
Published: April 24, 2025
In subjective decision-making, where decisions are based on contextual interpretation, Large Language Models (LLMs) can be integrated to present users with additional rationales to consider. The diversity of these rationales is mediated by a model's ability to consider the perspectives of different social actors. However, it remains unclear whether and how models differ in the distribution of perspectives they provide. We compare the perspectives taken by humans and LLMs when assessing subtle sexism scenarios. We show that the perspectives can be classified within a finite set (perpetrator, victim, decision-maker) that consistently covers the argumentations produced by humans and LLMs, but that their distributions and combinations differ, demonstrating both differences and similarities with human responses and between models. We argue for the need to systematically evaluate LLMs' perspective-taking in order to identify the model most suitable for a given decision-making task, and we discuss the implications for model evaluation.
Language: English
Cited by: 0
Scientific Reports, Journal Year: 2025, Issue 15(1)
Published: March 11, 2025
This study examines the imperative to align artificial general intelligence (AGI) development with societal, technological, ethical, and brain-inspired pathways to ensure its responsible integration into human systems. Using the PRISMA framework and BERTopic modeling, it identifies five key themes shaping AGI's trajectory: (1) societal integration, addressing broader impacts, public adoption, and policy considerations; (2) technological advancement, exploring real-world applications, implementation challenges, and scalability; (3) explainability, enhancing transparency, trust, and interpretability in AGI decision-making; (4) cognitive and ethical considerations, linking evolving cognitive architectures to ethical frameworks, accountability, and consequences; and (5) brain-inspired systems, leveraging neural models to improve learning efficiency, adaptability, and reasoning capabilities. The study makes a unique contribution by systematically uncovering underexplored themes and proposing a conceptual framework that connects AI advancements to the practical, multifaceted technical and ethical challenges of AGI development. The findings call for interdisciplinary collaboration to bridge critical gaps in governance and alignment, alongside strategies for equitable access, workforce adaptation, and sustainable integration. Additionally, the study highlights emerging research frontiers, such as AGI-consciousness interfaces and collective intelligence, offering new ways to integrate human-centered applications. By synthesizing insights across disciplines, it provides a comprehensive roadmap for balancing innovation with responsibilities, advancing both progress and well-being.
Language: English
Cited by: 0
EMBO Reports, Journal Year: 2025, Issue unknown
Published: March 24, 2025
Language: English
Cited by: 0
Computers in Human Behavior: Artificial Humans, Journal Year: 2024, Issue unknown, pp. 100107 - 100107
Published: Nov. 1, 2024
Language: English
Cited by: 0