A Taxonomy of Robot Autonomy for Human-Robot Interaction
Seung-Hyun Kim, Jacy Reese Anthis, Sarah Sebo

et al.

Published: March 10, 2024

Robot autonomy is an influential and ubiquitous factor in human-robot interaction (HRI), but it is rarely discussed beyond a one-dimensional measure of the degree to which a robot operates without human intervention. As robots become more sophisticated, this simple view could be expanded to capture the variety of autonomous behaviors robots can exhibit and to match the rich literature on autonomy in philosophy, psychology, and other fields. In this paper, we conduct a systematic review of HRI research and integrate it with this broader literature into a taxonomy of six distinct forms of robot autonomy: those based on human involvement at runtime (operational autonomy, intentional autonomy, and shared autonomy), human involvement before runtime (non-deterministic autonomy), and expressions of autonomy (cognitive autonomy and physical autonomy). We discuss future considerations for HRI that emerge from this study, including moral consequences, the idealization of "full" autonomy, and connections to agency and free will.

Language: English

The Moral Psychology of Artificial Intelligence
Jean‐François Bonnefon, Iyad Rahwan, Azim Shariff

et al.

Annual Review of Psychology, Journal Year: 2023, Volume and Issue: 75(1), P. 653 - 675

Published: Sept. 19, 2023

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies, when humans send them as their delegates to interactions or use them to disguise these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions they suggest.

Language: English

Citations: 47

The need for an empirical research program regarding human–AI relational norms
Madeline G. Reinecke, Andreas Kappes, Sebastian Porsdam Mann

et al.

AI and Ethics, Journal Year: 2025, Volume and Issue: 5(1), P. 71 - 80

Published: Jan. 9, 2025

Abstract As artificial intelligence (AI) systems begin to take on social roles traditionally filled by humans, it will be crucial to understand how this affects people's cooperative expectations. In the case of human–human dyads, different relationships are governed by different norms: For example, how two strangers, versus two friends or colleagues, should interact when faced with a similar coordination problem often differs. How will the rise of 'social' AI (and ultimately, superintelligent AI) complicate people's expectations about the norms that should govern different types of relationships, whether human–human or human–AI? Do people expect AI to adhere to the same dynamics as humans in a given role? Conversely, do they expect certain humans to act more like AI? Here, we consider how expectations may pull apart between human–human and human–AI relationships, detailing an empirical proposal for mapping these distinctions across relationship types. We see the data resulting from our proposal as relevant for understanding relationship-specific expectations in an age of social AI, and as a way to forecast potential resistance towards AI occupying certain social roles. Finally, such data can form the basis for ethical evaluations: What norms we should adopt for human–AI interactions, and reinforce through responsible design, depends partly on empirical facts about what people find intuitive in such interactions (along with the costs and benefits of maintaining these intuitions). Toward the end of the paper, we discuss how relational norms may change over time and the implications of this for our proposed research program.

Language: English

Citations: 2

Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey
Jacy Reese Anthis, Janet V. T. Pauketat, Ali Ladak

et al.

Published: April 24, 2025

Language: English

Citations: 1

The AI Double Standard: Humans Judge All AIs for the Actions of One
Aikaterina Manoli, Janet V. T. Pauketat, Jacy Reese Anthis

et al.

Proceedings of the ACM on Human-Computer Interaction, Journal Year: 2025, Volume and Issue: 9(2), P. 1 - 24

Published: May 2, 2025

Robots and other artificial intelligence (AI) systems are widely perceived as moral agents responsible for their actions. As AI proliferates, these perceptions may become entangled via the spillover of attitudes towards one AI to attitudes towards other AIs. We tested how the seemingly harmful and immoral actions of an AI or human agent spill over to attitudes towards other AIs or humans in two preregistered experiments. In Study 1 (N = 720), we established the spillover effect in human-AI interaction by showing that immoral actions increased attributions of negative moral agency (i.e., acting immorally) and decreased attributions of positive moral agency (i.e., acting morally) and moral patiency (i.e., deserving moral concern), both for the agent (a chatbot or human assistant) and for the group to which it belongs (all chatbot or human assistants). There was no significant difference in spillover effects between the AI and human contexts. In Study 2 (N = 684), we tested whether the spillover persisted when the agent was individuated with a name and described as an AI or a human, rather than specifically as a chatbot or personal assistant. We found that the spillover persisted in the AI context but not in the human context, possibly because AIs were perceived as more homogeneous due to their outgroup status relative to humans. This asymmetry suggests a double standard whereby AIs are judged more harshly when one of them morally transgresses. With the proliferation of diverse, autonomous AI systems, HCI research and design should account for the fact that experiences with one AI could easily generalize to all AIs and to negative outcomes, such as reduced trust.

Language: English

Citations: 1

What would qualify an artificial intelligence for moral standing?
Ali Ladak

AI and Ethics, Journal Year: 2023, Volume and Issue: 4(2), P. 213 - 228

Published: Jan. 25, 2023

Abstract What criteria must an artificial intelligence (AI) satisfy to qualify for moral standing? My starting point is that sentient AIs should qualify for moral standing. But future AIs may have unusual combinations of cognitive capacities, such as a high level of cognitive sophistication without sentience. This raises the question of whether sentience is a necessary criterion for moral standing, or merely sufficient. After reviewing nine criteria that have been proposed in the literature, I suggest there is a strong case for thinking that some non-sentient AIs, such as those that are conscious with non-valenced preferences and goals, and those that are non-conscious but sufficiently cognitively complex, should qualify for moral standing. After responding to challenges, I tentatively argue that taking into account uncertainty about which criteria an entity must satisfy, as well as strategic considerations about how these decisions will affect humans and other entities, further supports granting moral standing to some non-sentient AIs. I highlight three implications: the issue of AI moral standing may be more important, in terms of scale and urgency, than if either sentience or consciousness is necessary; researchers working on policies designed to be inclusive of sentient AIs should broaden their scope to include all AIs with morally relevant interests; and even those who think AIs cannot have moral standing should take the issue seriously. However, much uncertainty about these considerations remains, making this an important topic for future research.

Language: English

Citations: 22

The Moral Psychology of Artificial Intelligence
Ali Ladak, Steve Loughnan, Matti Wilks

et al.

Current Directions in Psychological Science, Journal Year: 2023, Volume and Issue: 33(1), P. 27 - 34

Published: Nov. 30, 2023

Artificial intelligences (AIs), although often perceived as mere tools, have increasingly advanced cognitive and social capacities. In response, psychologists are studying people's perceptions of AIs as moral agents (entities that can do right and wrong) and as moral patients (entities that can be the targets of right and wrong actions). This article reviews the extent to which people see AIs as moral agents and patients and how they feel about such AIs. We also examine how characteristics of AIs and of ourselves affect attributions of moral agency and patiency. We find that multiple factors contribute to attributions of agency and patiency in AIs, some of which overlap with attributions of morality to humans (e.g., mind perception) and some of which are unique (e.g., sci-fi fan identity). We identify several future directions, including studying the latest generation of chatbots and the likely more advanced AIs being rapidly developed.

Language: English

Citations: 21

Factors influencing technology use among low-income older adults: A systematic review
Diana Yian Lian Chan, Shaun Wen Huey Lee, Pei‐Lee Teh

et al.

Heliyon, Journal Year: 2023, Volume and Issue: 9(9), P. e20111 - e20111

Published: Sept. 1, 2023

As the world's aging population increases, leveraging technology to support aging is proving advantageous. Notably, technology adoption studies among older adults have received increasing scholarly attention, but findings from these studies do not reflect the context of low-income older adults. Studies focusing on low-income older adults are relatively few, and it remains unclear which factors influence this group's technology use. This systematic review aims to synthesize the factors influencing technology use among low-income older adults and to provide directions and opportunities for future research in information systems. Observing the literature through the lens of Social Cognitive Theory, we identified avenues for further research and integrated the framework with Maslow's hierarchy of needs to elucidate the phenomenon. Findings suggest that both personal and environmental factors, such as cognitions, affects, sociodemographic characteristics, and the technological and social environment, are significant predictors of technology use. Specifically, factors related to accessibility and affordability, such as income and perceived cost, are salient in a resource-limited setting. More importantly, the embeddedness of usage behavior in fundamental human needs plays a central role underlying technology use in this segment. However, more research is needed to understand the interaction between personal and environmental determinants in shaping technology use across diverse economic and cultural contexts. The study also sheds light on disciplinary gaps and the lack of investigations anchored in theoretical foundations, and suggests implications for research and practice.

Language: English

Citations: 19

Optimizing service encounters through mascot-like robot with a politeness strategy
Huixian Zhang, Mengmeng Song

Journal of Retailing and Consumer Services, Journal Year: 2024, Volume and Issue: 79, P. 103864 - 103864

Published: April 24, 2024

Language: English

Citations: 8

Graphic or short video? The influence mechanism of UGC types on consumers' purchase intention—Take Xiaohongshu as an example
Min Qin, Shanshan Qiu, Yu Zhao

et al.

Electronic Commerce Research and Applications, Journal Year: 2024, Volume and Issue: 65, P. 101402 - 101402

Published: April 27, 2024

Language: English

Citations: 8

Facilitation or hindrance: The contingent effect of organizational artificial intelligence adoption on proactive career behavior
Hongxia Lin, Jian Tian, Bao Cheng

et al.

Computers in Human Behavior, Journal Year: 2023, Volume and Issue: 152, P. 108092 - 108092

Published: Dec. 14, 2023

Language: English

Citations: 15