Agent-Based Simulation of Crowd Evacuation Through Complex Spaces
Mohamed Chatra, Mustapha Bourahla

Ingénierie des Systèmes d'Information, Journal year: 2024, Issue: 29(1), pp. 83-93

Published: Feb. 27, 2024

In this paper, we have developed a description of an agent-based model for simulating the evacuation of crowds from complex physical spaces when escaping dangerous situations. The description models a space containing a set of differently shaped fences and obstacles, and an exit door. The pedestrians comprising the crowd, moving in order to be evacuated, are described as intelligent agents with supervised machine learning, using perception-based data to perceive their particular environment differently. The model is written in the Python language, where its execution represents the simulation. Before simulation, the model can be validated with an animation written in the same language to fix possible problems in the description. A performance evaluation is presented with an analysis of the simulation results, showing that these results are very encouraging.
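As a rough illustration of the kind of model the abstract describes, here is a minimal sketch in Python (not the authors' code): pedestrian agents on a grid follow a breadth-first-search distance field toward an exit door while routing around a fence-like obstacle. The grid size, obstacle shape, and agent behaviour are all assumptions for illustration; the paper's agents additionally use supervised machine learning over perception data.

```python
from collections import deque
import random

GRID_W, GRID_H = 20, 10
EXIT = (19, 5)                                  # exit door cell (illustrative)
OBSTACLES = {(10, y) for y in range(3, 8)}      # a fence with gaps above and below

def distance_field():
    """Breadth-first search from the exit over all free cells."""
    dist = {EXIT: 0}
    queue = deque([EXIT])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < GRID_W and 0 <= ny < GRID_H
                    and (nx, ny) not in OBSTACLES and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

DIST = distance_field()

def step(agent):
    """Move to the neighbouring free cell closest to the exit."""
    x, y = agent
    options = [c for c in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1), (x, y))
               if c in DIST]
    return min(options, key=DIST.get)

agents = [(random.randrange(5), random.randrange(GRID_H)) for _ in range(30)]
t = 0
while agents:                                    # agents may overlap: a sketch
    agents = [a for a in (step(a) for a in agents) if a != EXIT]
    t += 1
print(f"all agents evacuated after {t} steps")
```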

Language: English

Toward trustworthy AI with integrative explainable AI frameworks
Bettina Finzel

it - Information Technology, Journal year: 2025, Issue: unknown

Published: April 30, 2025

As artificial intelligence (AI) increasingly permeates high-stakes domains such as healthcare, transportation, and law enforcement, ensuring its trustworthiness has become a critical challenge. This article proposes an integrative Explainable AI (XAI) framework to address the challenges of interpretability, explainability, interactivity, and robustness. By combining XAI methods, incorporating human-AI interaction, and using suitable evaluation techniques, the implementation of this framework serves as a holistic approach. The article discusses the framework's contribution to trustworthy AI and gives an outlook on open questions related to interdisciplinary collaboration, generalization, and evaluation.

Language: English

Cited by: 0

Interactive Explainable Anomaly Detection for Industrial Settings

Daniel Gramelt, Timon Höfer, Ute Schmid et al.

Lecture Notes in Computer Science, Journal year: 2025, Issue: unknown, pp. 133-147

Published: Jan. 1, 2025

Language: English

Cited by: 0

A typology for exploring the mitigation of shortcut behaviour
Felix Friedrich, Wolfgang Stammer, Patrick Schramowski et al.

Nature Machine Intelligence, Journal year: 2023, Issue: 5(3), pp. 319-330

Published: March 9, 2023

Language: English

Cited by: 7

Towards Explainable Proactive Robot Interactions for Groups of People in Unstructured Environments
Tamlin Love, Antonio Andriella, Guillem Alenyà et al.

Published: March 11, 2024

For social robots to be able to operate in unstructured public spaces, they need to gauge complex factors such as human-robot engagement and inter-person groups, and to decide how and with whom to interact. Additionally, they should be able to explain their decisions after the fact, to improve accountability and confidence in their behavior. To address this, we present a two-layered proactive system that extracts high-level features from low-level perceptions and uses these features to make decisions regarding the initiation and maintenance of human-robot interactions. With this system outlined, the primary focus of this work is then a novel method to generate counterfactual explanations in response to a variety of contrastive queries. We provide an early proof of concept to illustrate how such explanations can be generated by leveraging the two-layer system.
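To make the idea of counterfactual answers to contrastive queries concrete, here is a hypothetical sketch (the decision rule and feature names are invented, not the paper's two-layer system): it searches for the smallest set of high-level feature flips that would have changed the robot's interaction decision.

```python
from itertools import combinations

def decide(f):
    """Toy decision rule standing in for the system's second layer."""
    return f["engaged"] and not f["in_group"]   # interact only if engaged and alone

def counterfactual(features):
    """Return the smallest set of feature flips that changes the decision."""
    original = decide(features)
    names = list(features)
    for k in range(1, len(names) + 1):          # prefer minimal explanations
        for subset in combinations(names, k):
            altered = dict(features)
            for name in subset:
                altered[name] = not altered[name]
            if decide(altered) != original:
                return subset
    return None

f = {"engaged": True, "in_group": True}
print(counterfactual(f))  # ('in_group',): "had the person not been in a group..."
```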

Language: English

Cited by: 2

An Explanatory Model Steering System for Collaboration between Domain Experts and AI
Aditya Bhattacharya, Simone Stumpf, Katrien Verbert et al.

Published: June 27, 2024

With the increasing adoption of Artificial Intelligence (AI) systems in high-stakes domains, such as healthcare, effective collaboration between domain experts and AI is imperative. To facilitate such collaboration, we introduce an Explanatory Model Steering system that allows domain experts to steer prediction models using their domain knowledge. The system includes an explanation dashboard that combines different types of data-centric and model-centric explanations, and it can be steered through manual and automated data configuration approaches. It allows domain experts to apply their prior knowledge for configuring the underlying training data and refining the prediction models. Additionally, our model steering system has been evaluated in a healthcare-focused scenario with 174 healthcare experts through three extensive user studies. Our findings highlight the importance of involving domain experts during model steering, ultimately leading to improved human-AI collaboration.
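One simple form such steering could take, sketched under assumptions (scikit-learn and sample reweighting; the paper's dashboard and configuration approaches are considerably richer): an expert down-weights training samples they consider dubious, and the prediction model is refit on the new weights.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data; all names here are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

weights = np.ones(len(X))
expert_flags = [3, 17, 42]          # indices the expert marked as dubious
weights[expert_flags] = 0.1         # reduce their influence on training

model = LogisticRegression().fit(X, y, sample_weight=weights)
print(f"training accuracy after steering: {model.score(X, y):.2f}")
```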

Language: English

Cited by: 2

Towards a neuro-symbolic cycle for human-centered explainability
Alessandra Mileo

Deleted Journal, Journal year: 2024, Issue: unknown, pp. 1-13

Published: Aug. 28, 2024

Deep learning is being very successful in supporting humans in the interpretation of complex data (such as images and text) for critical decision tasks. However, it still remains difficult for human experts to understand how such results are achieved, due to the "black box" nature of the deep models used. In high-stakes decision making scenarios such as medical imaging diagnostics, a lack of transparency hinders the adoption of these techniques in practice. In this position paper we present a conceptual methodology for the design of a neuro-symbolic cycle to address the need for explainability and confidence (including trust) when deep learning is used to support decision making, and we discuss the challenges and opportunities of its implementation as well as its application in real world scenarios. We elaborate on how to leverage the potential of hybrid artificial intelligence, combining neural learning and symbolic reasoning in a human-centered approach to explainability. We advocate that the phases of the cycle should include: i) the extraction of knowledge from a trained network to represent and encode its behaviour, ii) the validation of the extracted knowledge through commonsense and domain knowledge, iii) the generation of explanations for human experts, iv) the ability to map human feedback into the validated representation from i), and v) the injection of some of this knowledge into a non-trained network to enable knowledge-informed learning. The holistic combination of causality, expressive logical inference, and representation learning would result in a seamless integration of (neural) learning and (cognitive) reasoning that makes it possible to retain access to the inherently explainable symbolic representation without losing the power of the deep representation. The involvement of human experts in the design process is crucial, and it paves the way for a new human-AI paradigm where the human role goes beyond that of labeling data, towards the validation of neural-cognitive processes.
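As a toy illustration of phase i) alone, knowledge extraction from a trained network, one common technique is to fit an interpretable surrogate to the network's predictions. The sketch below (our choice of technique, not necessarily the author's) distills a small MLP into a depth-2 decision tree whose rules can then be inspected, validated against domain knowledge, and refined.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data with a simple underlying rule the surrogate should recover.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = ((X[:, 0] > 0) & (X[:, 1] < 0.5)).astype(int)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

# Fit the surrogate to the *network's* predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, net.predict(X))
print(export_text(surrogate, feature_names=["f0", "f1", "f2"]))
```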

Language: English

Cited by: 2

Studying How to Efficiently and Effectively Guide Models with Explanations
Sukrut Rao, Moritz Böhle, Amin Parchami-Araghi et al.

2021 IEEE/CVF International Conference on Computer Vision (ICCV), Journal year: 2023, Issue: unknown, pp. 1922-1933

Published: Oct. 1, 2023

Despite being highly performant, deep neural networks might base their decisions on features that spuriously correlate with the provided labels, thus hurting generalization. To mitigate this, 'model guidance' has recently gained popularity, i.e. the idea of regularizing the models' explanations to ensure that they are "right for the right reasons" [49]. While various techniques to achieve such model guidance have been proposed, experimental validation of these approaches has so far been limited to relatively simple and/or synthetic datasets. To better understand the effectiveness of the design choices that have been explored in the context of model guidance, in this work we conduct an in-depth evaluation across various loss functions, attribution methods, models, and 'guidance depths' on the PASCAL VOC 2007 and MS COCO 2014 datasets. As annotation costs can limit its applicability, we also place a particular focus on efficiency. Specifically, we guide models via bounding box annotations, which are much cheaper to obtain than the commonly used segmentation masks, and evaluate the robustness of model guidance under limited (e.g. with only 1% of annotated images) or overly coarse annotations. Further, we propose using the EPG score as an additional evaluation metric and loss function ('Energy loss'). We show that optimizing for the Energy loss leads models to exhibit a distinct focus on object-specific features, despite the annotations including background regions. Lastly, we show that such model guidance can improve generalization under distribution shifts. Code available at: https://github.com/sukrutrao/Model-Guidance
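Based on our reading of the abstract (the repository linked above contains the authors' actual implementation), an EPG-style 'Energy loss' can be sketched as the fraction of positive attribution energy falling inside the bounding-box mask, turned into a differentiable penalty:

```python
import torch

def energy_loss(attributions: torch.Tensor, box_mask: torch.Tensor) -> torch.Tensor:
    """Sketch of an EPG-style loss. attributions, box_mask: (B, 1, H, W),
    box_mask is 1 inside the bounding box and 0 elsewhere."""
    pos = attributions.clamp(min=0)                 # positive attributions only
    inside = (pos * box_mask).sum(dim=(1, 2, 3))
    total = pos.sum(dim=(1, 2, 3)) + 1e-8           # avoid division by zero
    epg = inside / total                            # EPG score in [0, 1]
    return (1.0 - epg).mean()                       # minimal when energy is in the box

attr = torch.rand(4, 1, 32, 32, requires_grad=True)
mask = torch.zeros(4, 1, 32, 32)
mask[:, :, 8:24, 8:24] = 1.0
print(energy_loss(attr, mask))                      # differentiable regularizer
```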

Language: English

Cited by: 5

Benefits of Human-AI Interaction for Expert Users Interacting with Prediction Models: a Study on Marathon Running
Heleen Muijlwijk, Martijn C. Willemsen, Barry Smyth et al.

Published: March 18, 2024

Users with large domain knowledge can be reluctant to use prediction models. This also applies to the sports domain, where running coaches rarely rely on marathon prediction tools for race-plan advice for their runners' next marathon. This paper studies the effect of adding interactivity to such models, to incorporate and acknowledge users' domain knowledge. In think-aloud sessions and an online study, we tested an interactive machine learning tool that allowed coaches to indicate the importance of earlier races feeding into the prediction model. Our results show that coaches deploy rich domain knowledge when working with the model, and that for runners familiar to them, their adaptations improved prediction accuracy. Those who could interact with the model displayed more trust in and acceptance of the resulting predictions.
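A toy sketch of the interaction pattern described here, with invented numbers and a deliberately simple model (a weighted average of past race paces; this abstract does not specify the study's actual prediction model): the coach edits the weight given to each earlier race.

```python
import numpy as np

past_paces = np.array([5.10, 4.55, 4.40])   # min/km, oldest to most recent race
model_weights = np.array([0.2, 0.3, 0.5])   # default: recent races count more

def predict_pace(weights):
    """Weighted average of past paces; weights are renormalised after edits."""
    weights = weights / weights.sum()
    return float(weights @ past_paces)

print(f"default prediction: {predict_pace(model_weights):.2f} min/km")
# The coach knows race 1 was run in extreme heat and down-weights it:
print(f"steered prediction: {predict_pace(np.array([0.05, 0.35, 0.6])):.2f} min/km")
```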

Language: English

Cited by: 1

Representation Debiasing of Generated Data Involving Domain Experts
Aditya Bhattacharya, Simone Stumpf, Katrien Verbert et al.

Published: June 27, 2024

Biases in Artificial Intelligence (AI) or Machine Learning (ML) systems due to skewed datasets problematise the application of prediction models in practice. Representation bias is a prevalent form of bias found in the majority of datasets. This bias arises when the training data inadequately represents certain segments of the data space, resulting in poor generalisation of the prediction models. Despite AI practitioners employing various methods to mitigate representation bias, their effectiveness is often limited by a lack of thorough domain knowledge. To address this limitation, this paper introduces human-in-the-loop interaction approaches for the representation debiasing of generated data involving domain experts. Our work advocates a controlled data generation process involving domain experts to effectively mitigate the effects of representation bias. We argue that domain experts can leverage their expertise to assess how representation bias affects prediction models. Moreover, our interaction approaches can facilitate domain experts in steering data augmentation algorithms to produce debiased augmented data, and in validating and refining the generated samples to reduce representation bias. We also discuss how these approaches can be leveraged for designing and developing user-centred AI systems to mitigate the impact of representation bias through effective collaboration between domain experts and AI.
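As one concrete (assumed) instance of expert-steered augmentation, the sketch below upsamples a segment of the data space that an expert has flagged as underrepresented until it reaches at least an expert-specified share of the training set; the paper's generation algorithms are more sophisticated than this naive resampling.

```python
import pandas as pd

# Toy dataset where the '65+' segment is underrepresented (names illustrative).
df = pd.DataFrame({
    "age_group": ["18-40"] * 90 + ["65+"] * 10,
    "label": [0, 1] * 50,
})

target_share = {"65+": 0.3}                        # expert-specified minimum share
parts = [df]
for segment, share in target_share.items():
    seg = df[df["age_group"] == segment]
    needed = int(share * len(df) / (1 - share)) - len(seg)
    if needed > 0:                                 # naive upsampling with replacement
        parts.append(seg.sample(needed, replace=True, random_state=0))

debiased = pd.concat(parts, ignore_index=True)
print(debiased["age_group"].value_counts(normalize=True))
```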

Language: English

Cited by: 1

Using Explanations to Guide Models
Sukrut Rao, Moritz Böhle, Amin Parchami-Araghi et al.

arXiv (Cornell University), Journal year: 2023, Issue: unknown

Published: Jan. 1, 2023

Despite being highly performant, deep neural networks might base their decisions on features that spuriously correlate with the provided labels, thus hurting generalization. To mitigate this, 'model guidance' has recently gained popularity, i.e. the idea of regularizing the models' explanations to ensure that they are "right for the right reasons". While various techniques to achieve such model guidance have been proposed, experimental validation of these approaches has so far been limited to relatively simple and/or synthetic datasets. To better understand the effectiveness of the design choices that have been explored in the context of model guidance, in this work we conduct an in-depth evaluation across various loss functions, attribution methods, models, and 'guidance depths' on the PASCAL VOC 2007 and MS COCO 2014 datasets. As annotation costs can limit its applicability, we also place a particular focus on efficiency. Specifically, we guide models via bounding box annotations, which are much cheaper to obtain than the commonly used segmentation masks, and evaluate the robustness of model guidance under limited (e.g. with only 1% of annotated images) or overly coarse annotations. Further, we propose using the EPG score as an additional evaluation metric and loss function ('Energy loss'). We show that optimizing for the Energy loss leads models to exhibit a distinct focus on object-specific features, despite the annotations including background regions. Lastly, we show that such model guidance can improve generalization under distribution shifts. Code available at: https://github.com/sukrutrao/Model-Guidance

Language: English

Cited by: 3