Multimodal Co-Construction of Explanations with XAI Workshop
Hendrik Buschmeier, Teena Hassan, Stefan Kopp et al.

Published: Oct. 30, 2024

Language: English

How informative is your XAI? Assessing the quality of explanations through information power
Marco Matarese, Francesco Rea, Katharina J. Rohlfing et al.

Frontiers in Computer Science, Journal year: 2025, Number: 6

Published: Jan. 8, 2025

A growing consensus emphasizes the efficacy of user-centered and personalized approaches within the field of explainable artificial intelligence (XAI). The proliferation of diverse explanation strategies in recent years promises to improve the interaction between humans and artificial agents. This poses the challenge of assessing the goodness of a proposed explanation, which so far has primarily relied on indirect measures, such as the user's task performance. We introduce an assessment task designed to objectively and quantitatively measure the quality of XAI systems, specifically in terms of their "information power." This metric aims to evaluate the amount of information the system provides to non-expert users during the interaction. This work has a three-fold objective: to propose the Information Power assessment task, to compare our proposal with other measures with respect to eight characteristics, and to provide detailed instructions to implement it based on researchers' needs.

Language: English

Cited by: 0
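
The abstract stops short of reproducing the metric itself. As a purely illustrative sketch, assuming (our assumption, not the authors' definition) that the information an explanation conveys is operationalized as the reduction in Shannon entropy of a non-expert's belief over candidate hypotheses before and after the interaction:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def information_conveyed(belief_before, belief_after):
    """Entropy reduction in the user's belief; an illustrative proxy only,
    not the metric defined in the paper."""
    return entropy(belief_before) - entropy(belief_after)

# Hypothetical session: a non-expert is initially undecided among four
# candidate readings of a model's decision; the explanation concentrates
# their belief on one reading.
before = [0.25, 0.25, 0.25, 0.25]  # maximal uncertainty: 2 bits
after = [0.70, 0.10, 0.10, 0.10]   # belief after the explanation

print(f"information conveyed ≈ {information_conveyed(before, after):.2f} bits")
```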

A multi-modal explainability approach for human-aware robots in multi-party conversation
Iveta Bečková, Štefan Pócoš, Giulia Belgiovine et al.

Computer Vision and Image Understanding, Journal year: 2025, Number: unknown, Pages: 104304 - 104304

Published: Feb. 1, 2025

Language: English

Cited by: 0

An Exploration of Trust in Human-Robot Interaction: From Measurement to Repair Strategies and Design Principles

Fatima Ayoub, Aphra Kerr, Rudi Villing et al.

Springer Proceedings in Advanced Robotics, Journal year: 2025, Number: unknown, Pages: 58 - 72

Published: Jan. 1, 2025

Language: English

Cited by: 0

Becoming dehumanized by a service robot: An empirical examination of what happens when non-humans perceive us as less than full humans
Magnus Söderlund

Computers in Human Behavior: Artificial Humans, Journal year: 2025, Number: 4, Pages: 100163 - 100163

Published: May 1, 2025

Language: English

Cited by: 0

Social bonding between humans, animals, and robots: Dogs outperform AIBOs, their robotic replicas, as social companions

Stella Klumpe, Kelsey Mitchell, Emma Cox et al.

PLoS ONE, Journal year: 2025, Number: 20(6), Pages: e0324312 - e0324312

Published: June 3, 2025

In the evolving landscape of technology, robots have emerged as social companions, prompting an investigation into bonding between humans and robots. While human-animal interactions are well studied, human-robot interactions (HRI) remain comparatively underexplored. Ethorobotics, a field of robotic engineering based on ecology and ethology, suggests designing companion robots modeled on animals, which are simpler to emulate than humans. However, it is unclear whether these robots can match the companionship provided by their original models. This study examined social bonding with AIBOs, dog-inspired robots, compared to real dogs. Nineteen female participants engaged in 12 affiliative interactions with dogs and AIBOs across two counter-balanced, one-month phases. Social bonding was assessed through the change in urinary oxytocin (OXT) level over an interaction, self-reported attachment using an adapted version of the Lexington Attachment to Pets Scale, and evaluations collected by administering the Robot-Dog Questionnaire. To examine OXT changes comparing dogs and AIBOs, we conducted mixed-effects model analyses with planned follow-up comparisons. Frequency comparison, binary logistic regression, and thematic analysis were performed to analyze the evaluations. Results revealed significant differences in how the two companion types foster bonds: urinary OXT levels increased during interactions with dogs but decreased with AIBOs, and participants reported stronger attachment to dogs and rated them as better companions. These findings highlight the current limitations of robotic replicas in immediately fostering social bonds. Our study contributes to the growing body of HRI research by demonstrating the existing gap between animal companions and their robotic replicas. It highlights the need for further research to understand the complexities of social bonding, which is essential to implement successful applications in diverse domains such as elderly and health care, education, and entertainment.

Language: English

Cited by: 0
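
The oxytocin analysis described in the abstract above, a fixed effect of companion type on OXT change with repeated measures per participant, maps naturally onto a random-intercept mixed-effects model. A minimal sketch with statsmodels; the data file and column names (oxt_change, companion, participant) are hypothetical stand-ins for the study's variables:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per interaction session; columns are hypothetical stand-ins.
df = pd.read_csv("oxt_sessions.csv")

# Fixed effect of companion type (dog vs. AIBO), random intercept per
# participant to account for the repeated-measures design.
model = smf.mixedlm("oxt_change ~ companion", data=df, groups=df["participant"])
result = model.fit()
print(result.summary())

# Planned follow-up comparisons would then contrast the estimated effects,
# e.g. via result.t_test or pairwise contrasts on the fitted model.
```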

Service robot verbalization in service processes with moral implications and its impact on satisfaction
Magnus Söderlund

Technological Forecasting and Social Change, Journal year: 2023, Number: 196, Pages: 122831 - 122831

Published: Sep. 12, 2023

Service robots are expected to become increasingly common. As their capabilities become more advanced, it is also expected that they will be involved in tasks for which a human user would want to know why they do what they are doing. One way to accomplish this is to program robots so that they verbalize (i.e., think aloud) while providing service. This ability is likely to be particularly useful for tasks that involve behavioral norms. The present study used an experimental design to manipulate the level of a robot's verbalization of the motivations behind its behavior (low vs. high) when it was asked by a human to carry out a task with moral implications. The results show that the robot's verbalizing contributed positively to satisfaction with its performance, and that this impact was mediated by understandability, perceived morality, and intellectual stimulation.

Language: English

Cited by: 8
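
A sketch of how such a mediated effect can be tested, assuming a simple two-regression indirect effect (verbalization → understandability → satisfaction) with a percentile-bootstrap confidence interval; the variable names and data file are hypothetical, and the paper may well have used a different mediation procedure:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("verbalization_study.csv")  # hypothetical per-participant data

def indirect_effect(d):
    # Path a: manipulation -> mediator; path b: mediator -> outcome,
    # controlling for the manipulation. Indirect effect = a * b.
    a = smf.ols("understandability ~ verbalization",
                data=d).fit().params["verbalization"]
    b = smf.ols("satisfaction ~ understandability + verbalization",
                data=d).fit().params["understandability"]
    return a * b

# Percentile bootstrap over resampled datasets.
boot = [indirect_effect(df.sample(frac=1.0, replace=True)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect, 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 suggests mediation
```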

Never Trust Anything That Can Think for Itself, if You Can’t Control Its Privacy Settings: The Influence of a Robot’s Privacy Settings on Users’ Attitudes and Willingness to Self-disclose
Julia G. Stapels, Angelika Penner, Niels Diekmann et al.

International Journal of Social Robotics, Journal year: 2023, Number: 15(9-10), Pages: 1487 - 1505

Published: Sep. 22, 2023

When encountering social robots, potential users often face a dilemma between privacy and utility. That is, high utility comes at the cost of lenient privacy settings that allow the robot to store personal data and connect to the internet permanently, which brings associated security risks. To date, however, it remains unclear how this dilemma affects attitudes and behavioral intentions towards the respective robot. To shed light on the influence of a robot’s privacy settings on robot-related attitudes and behavioral intentions, we conducted two online experiments with a total sample of N = 320 German university students. We hypothesized that strict privacy settings, compared to lenient ones, would result in more favorable attitudes and behavioral intentions towards the robot in Experiment 1. For Experiment 2, we expected more favorable attitudes and behavioral intentions for choosing the privacy settings independently in comparison to evaluating preset privacy settings. However, the two manipulations seemed to influence diverging domains: while strict privacy settings increased trust, decreased subjective ambivalence, and increased willingness to self-disclose, the privacy-settings choice primarily impacted likeability, contact intentions, and depth of self-disclosure. Strict privacy settings might reduce the risk associated with robot contact and thereby reduce risk-related attitudes and increase trust-dependent behavioral intentions. If allowed to choose, however, people make the robot ‘their own’ by making their own privacy-utility tradeoff. This tradeoff is likely a compromise between full privacy and full utility and thus does not reduce the risks of robot contact as much as strict privacy settings do. Future research should replicate these results using real-life human-robot interaction and different scenarios to further investigate the psychological mechanisms causing such divergences.

Language: English

Cited by: 6

Using AI Planning for Managing Affective States in Social Robotics
Alan Lindsay, Andrés A. Ramírez-Duque, Mary Ellen Foster et al.

Published: March 11, 2024

Social robotics has recently focused on developing AI agents that recognise and respond to human emotions. The use of plan-based approaches is promising, especially in domains where collecting data in advance is challenging (e.g., medical domains). However, we observe that the appropriate user affective state will vary with the particular interaction, the expected impact of the robot's behaviours on the user, and the opportunity for and accuracy of affect sensing. We note that there are different ways of modelling the user's affective state, and that the choice should take into consideration the relationship between the affective state and the robot's behaviour. We propose alternative modelling methods and draw on lessons learnt from a recent project in order to discuss the relevant factors of each approach. We use simulated interactions to demonstrate the flexibility of model-based generation of interaction strategies.

Language: English

Cited by: 2
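
A toy illustration of the plan-based idea: choose the robot's behaviour by one-step lookahead over a discrete model of the user's affective state. The states, actions, transition probabilities, and rewards below are invented for exposition and do not reproduce the authors' planning model:

```python
# Estimated P(next_state | action, current_state); in a real system these
# would be hand-specified by domain experts or learned from interaction data.
TRANSITIONS = {
    ("reassure", "distressed"): {"distressed": 0.3, "neutral": 0.6, "engaged": 0.1},
    ("inform",   "distressed"): {"distressed": 0.6, "neutral": 0.3, "engaged": 0.1},
    ("reassure", "neutral"):    {"distressed": 0.1, "neutral": 0.6, "engaged": 0.3},
    ("inform",   "neutral"):    {"distressed": 0.1, "neutral": 0.4, "engaged": 0.5},
}
# Desirability of each affective state from the robot's perspective.
REWARD = {"distressed": -1.0, "neutral": 0.0, "engaged": 1.0}

def best_action(affect: str) -> str:
    """Greedy one-step lookahead: pick the action with the best expected affect."""
    def expected_reward(action: str) -> float:
        return sum(p * REWARD[s] for s, p in TRANSITIONS[(action, affect)].items())
    return max(("reassure", "inform"), key=expected_reward)

print(best_action("distressed"))  # reassure: soothing beats informing here
print(best_action("neutral"))     # inform: informing is expected to engage
```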

¿Tramposo e injusto? Entonces, es humano. Robots sociales educativos y ética sintética [Cheating and unfair? Then it is human. Educational social robots and synthetic ethics]
María Isabel Gómez-León

Revista Tecnología Ciencia y Educación, Journal year: 2024, Number: unknown, Pages: 167 - 186

Published: Jan. 4, 2024

Education is beginning to make use of emotional artificial intelligence through anthropomorphized educational robots. Evidence supports that students of both sexes are capable of forming emotional bonds with these agents. However, more and more cases of abusive disinhibition are being found in this type of interaction, such as racist or sexist degradation, abuse of power, and violence. Some researchers warn of the negative long-term consequences these behaviors may have for the ethics of those who learn them. Despite its relevance from a social and educational perspective, there are few studies that attempt to understand the mechanisms underlying these immoral and collectively harmful practices. The objective of this article is to review and analyze the research that has sought to study unethical human behavior in interaction with anthropomorphic social robots. A descriptive bibliometric study was carried out following the criteria of the PRISMA statement. The results show that, under certain circumstances, the anthropomorphization of and attribution of intentionality to robotic agents could be disadvantageous, provoking attitudes of rejection and even dehumanization, whereas a realistic view of the capabilities and limitations that guide human conduct could help harness the great potential of this technology to promote students' moral development and conscience.

Language: Spanish

Cited by: 1

More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human–Automation Interaction
Shihong Ling, Yutong Zhang, Na Du et al.

Human Factors: The Journal of the Human Factors and Ergonomics Society, Journal year: 2024, Number: 66(12), Pages: 2606 - 2620

Published: March 4, 2024

The study aimed to enhance transparency in autonomous systems by automatically generating and visualizing AI confidence and explanations, and by assessing their impacts on performance, trust, preference, and eye-tracking behaviors in human-automation interaction.

Language: English

Cited by: 1