A Taxonomy of Robot Autonomy for Human-Robot Interaction
Seung-Hyun Kim, Jacy Reese Anthis, Sarah Sebo

et al.

Published: March 10, 2024

Robot autonomy is an influential and ubiquitous factor in human-robot interaction (HRI), but it is rarely discussed beyond a one-dimensional measure of the degree to which a robot operates without human intervention. As robots become more sophisticated, this simple view could be expanded to capture the variety of autonomous behaviors robots can exhibit and to match the rich literature on autonomy in philosophy, psychology, and other fields. In this paper, we conduct a systematic review of autonomy in HRI and integrate it with this broader literature into a taxonomy of six distinct forms of autonomy: those based on human involvement at runtime (operational autonomy, intentional autonomy, and shared autonomy), human involvement before runtime (non-deterministic autonomy), and expressions of autonomy (cognitive autonomy and physical autonomy). We discuss future considerations for robot autonomy that emerge from this study, including moral consequences, the idealization of "full" autonomy, and connections with agency and free will.

Language: English

The Moral Psychology of Artificial Intelligence
Jean‐François Bonnefon, Iyad Rahwan, Azim Shariff

et al.

Annual Review of Psychology, Journal year: 2023, Issue 75(1), pp. 653-675

Published: Sep. 19, 2023

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, patients, and proxies, with a focus on recent findings and the open questions they suggest.

Language: English

Cited by: 49

The need for an empirical research program regarding human–AI relational norms
Madeline G. Reinecke, Andreas Kappes, Sebastian Porsdam Mann

et al.

AI and Ethics, Journal year: 2025, Issue 5(1), pp. 71-80

Published: Jan. 9, 2025

Abstract: As artificial intelligence (AI) systems begin to take on social roles traditionally filled by humans, it will be crucial to understand how this affects people's cooperative expectations. In the case of human–human dyads, different relationships are governed by different norms: for example, how two strangers, versus two friends or colleagues, should interact when faced with a similar coordination problem often differs. How will the rise of 'social' AI (and ultimately, superintelligent AI) complicate people's expectations about the norms that should govern different types of relationships, whether human–human or human–AI? Do people expect AI to adhere to the same dynamics as humans in a given role? Conversely, do they expect humans in certain roles to act more like AI? Here, we consider how normative expectations may pull apart between human–human and human–AI relationships, detailing an empirical proposal for mapping these distinctions across relationship types. We see the data resulting from our proposal as relevant to understanding people's relationship-specific expectations in the age of AI, which may also forecast potential resistance towards AI occupying certain social roles. Finally, these data can form the basis for ethical evaluations: what norms we should expect AI to adopt in interactions, and reinforce through responsible design, depends partly on empirical facts about what people find intuitive in such interactions (along with the costs and benefits of maintaining these). Toward the end of the paper, we discuss how relational norms may change over time and the implications of this for our proposed research program.

Language: English

Cited by: 2

Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey
Jacy Reese Anthis, Janet V. T. Pauketat, Ali Ladak

et al.

Published: April 24, 2025

Language: English

Cited by: 1

The AI Double Standard: Humans Judge All AIs for the Actions of One
Aikaterina Manoli, Janet V. T. Pauketat, Jacy Reese Anthis

et al.

Proceedings of the ACM on Human-Computer Interaction, Journal year: 2025, Issue 9(2), pp. 1-24

Published: May 2, 2025

Robots and other artificial intelligence (AI) systems are widely perceived as moral agents responsible for their actions. As AI proliferates, these perceptions may become entangled via the spillover of attitudes towards one AI to attitudes towards other AIs. We tested how the seemingly harmful and immoral actions of an AI or human agent spill over to attitudes towards other AIs or humans in two preregistered experiments. In Study 1 (N = 720), we established the spillover effect in human-AI interaction by showing that immoral actions increased attributions of negative agency (i.e., acting immorally) and decreased attributions of positive agency (i.e., acting morally) and patiency (i.e., deserving moral concern), both for the agent (a chatbot or human assistant) and for the group to which the agent belongs (all chatbot or human assistants). There was no significant difference in the spillover effects between the AI and human contexts. In Study 2 (N = 684), we tested whether the spillover persisted when the agent was individuated with a name and described as an AI or a human, rather than specifically as a chatbot or a personal assistant. We found that the spillover persisted in the AI context but not in the human context, possibly because AIs were perceived as more homogeneous due to their outgroup status relative to humans. This asymmetry suggests a double standard whereby AIs are judged more harshly than humans when one agent morally transgresses. With the proliferation of diverse, autonomous AI systems, HCI research and design should account for the fact that experiences with one AI could easily generalize to perceptions of all AIs and to negative outcomes, such as reduced trust.

Language: English

Cited by: 1

What would qualify an artificial intelligence for moral standing?
Ali Ladak

AI and Ethics, Journal year: 2023, Issue 4(2), pp. 213-228

Published: Jan. 25, 2023

Abstract: What criteria must an artificial intelligence (AI) satisfy to qualify for moral standing? My starting point is that sentient AIs should qualify for moral standing. But future AIs may have unusual combinations of cognitive capacities, such as a high level of cognitive sophistication without sentience. This raises the question of whether sentience is a necessary criterion for moral standing, or merely sufficient. After reviewing nine criteria that have been proposed in the literature, I suggest there is a strong case for thinking that some non-sentient AIs should have moral standing: those that are conscious and have non-valenced preferences and goals, and those that are non-conscious and sufficiently cognitively complex. After responding to some challenges, I tentatively argue that taking into account uncertainty about which criteria an entity must satisfy, as well as strategic considerations about how such decisions will affect humans and other entities, further supports granting moral standing to some non-sentient AIs. I highlight three implications: the issue of AI moral standing may be more important, in terms of scale and urgency, than if either sentience or consciousness is necessary; researchers working on policies designed to be inclusive of sentient AIs should broaden their scope to include all AIs with morally relevant interests; and even those who think AIs cannot qualify for moral standing should take the issue seriously. However, much uncertainty about these considerations remains, making this an important topic for future research.

Language: English

Cited by: 23

Factors influencing technology use among low-income older adults: A systematic review
Diana Yian Lian Chan, Shaun Wen Huey Lee, Pei‐Lee Teh

et al.

Heliyon, Journal year: 2023, Issue 9(9), Article e20111

Published: Sep. 1, 2023

As the world's aging population increases, leveraging technology to support older adults is proving advantageous. Notably, technology adoption studies among older adults have received increasing scholarly attention, but findings from these studies do not reflect the context of low-income older adults. Studies focusing on this population were relatively few, and it remains unclear which factors influence this group's technology use. This systematic review aims to synthesize the factors influencing technology use among low-income older adults and to provide directions and opportunities for future research in information systems. Observing the literature through the lens of Social Cognitive Theory, we identified avenues for further research and integrated the framework with Maslow's hierarchy of needs to elucidate the phenomenon. Findings suggest that both personal and environmental factors, such as cognitions, affects, sociodemographic characteristics, and the technological and social environment, are significant predictors of technology use. Specifically, factors related to accessibility and affordability, such as income and perceived cost, are salient in a resource-limited setting. More importantly, the embeddedness of usage behavior in fundamental human needs plays a central role in underlying technology use in this segment. However, more research is needed to understand the interaction between personal, environmental, and technological determinants shaping technology use across diverse economic and cultural contexts. The study also sheds light on disciplinary gaps and the lack of investigations anchored in theoretical foundations, and it suggests implications for research and practice.

Language: English

Cited by: 21

The Moral Psychology of Artificial Intelligence
Ali Ladak, Steve Loughnan, Matti Wilks

et al.

Current Directions in Psychological Science, Journal year: 2023, Issue 33(1), pp. 27-34

Published: Nov. 30, 2023

Artificial intelligences (AIs), although often perceived as mere tools, have increasingly advanced cognitive and social capacities. In response, psychologists are studying people's perceptions of AIs as moral agents (entities that can do right and wrong) and moral patients (entities that can be targets of right and wrong actions). This article reviews the extent to which people see AIs as moral agents and patients and how they feel about such AIs. We also examine how characteristics of AIs and of ourselves affect attributions of moral agency and patiency. We find that multiple factors contribute to attributions of agency and patiency in AIs, some of which overlap with attributions of morality to humans (e.g., mind perception) and some of which are unique to AIs (e.g., sci-fi fan identity). We identify several future directions, including research on the latest generation of chatbots and the likely more advanced AIs being rapidly developed.

Language: English

Cited by: 21

Optimizing service encounters through mascot-like robot with a politeness strategy
Huixian Zhang, Mengmeng Song

Journal of Retailing and Consumer Services, Journal year: 2024, Issue 79, Article 103864

Published: April 24, 2024

Language: English

Cited by: 8

Graphic or short video? The influence mechanism of UGC types on consumers' purchase intention—Take Xiaohongshu as an example
Min Qin, Shanshan Qiu, Yu Zhao

et al.

Electronic Commerce Research and Applications, Journal year: 2024, Issue 65, Article 101402

Published: April 27, 2024

Language: English

Cited by: 8

Facilitation or hindrance: The contingent effect of organizational artificial intelligence adoption on proactive career behavior
Hongxia Lin, Jian Tian, Bao Cheng

et al.

Computers in Human Behavior, Journal year: 2023, Issue 152, Article 108092

Published: Dec. 14, 2023

Language: English

Cited by: 15