Agent-Based Simulation of Crowd Evacuation Through Complex Spaces
Mohamed Chatra,

Mustapha Bourahla

Ingénierie des systèmes d'information, Journal year: 2024, Issue: 29(1), pp. 83 - 93

Published: Feb. 27, 2024

In this paper, we develop a description of an agent-based model for simulating the evacuation of crowds from complex physical spaces while escaping dangerous situations. The description models a space containing a set of differently shaped fences and obstacles, and an exit door. The pedestrians comprising the crowd, moving in order to be evacuated, are described as intelligent agents equipped with supervised machine learning using perception-based data, as they perceive a particular environment differently. The model is described in the Python language, where its execution represents the simulation. Before simulation, the model can be validated with an animation, written in the same language, to fix possible problems in the description. A performance evaluation is presented with an analysis of the simulation results, showing that these results are very encouraging.
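The paper's own Python code is not reproduced in the abstract. As a rough illustration of what one step of an agent-based evacuation model can look like in Python (the paper's implementation language), here is a minimal sketch; the straight-line attraction to the exit, the random sidestep around obstacles, and all names are illustrative assumptions, not the authors' learned agent behaviour.

```python
# Minimal, hypothetical sketch of one kind of agent-based evacuation model.
import math
import random

class Agent:
    def __init__(self, x, y, speed=1.0):
        self.x, self.y, self.speed = x, y, speed
        self.evacuated = False

    def step(self, exit_pos, obstacles):
        """Advance one time step toward the exit, sidestepping obstacles."""
        if self.evacuated:
            return
        dx, dy = exit_pos[0] - self.x, exit_pos[1] - self.y
        dist = math.hypot(dx, dy)
        if dist <= self.speed:  # the agent reaches the exit door
            self.evacuated = True
            return
        nx = self.x + self.speed * dx / dist
        ny = self.y + self.speed * dy / dist
        # Naive avoidance: if the next position falls inside a circular
        # obstacle (cx, cy, radius), take a small random sidestep instead.
        if any(math.hypot(nx - cx, ny - cy) < r for cx, cy, r in obstacles):
            angle = random.uniform(0.0, 2.0 * math.pi)
            nx = self.x + self.speed * math.cos(angle)
            ny = self.y + self.speed * math.sin(angle)
        self.x, self.y = nx, ny

def simulate(agents, exit_pos, obstacles, max_steps=500):
    """Run until everyone is out or the step budget is exhausted."""
    for t in range(max_steps):
        for a in agents:
            a.step(exit_pos, obstacles)
        if all(a.evacuated for a in agents):
            return t + 1  # time steps needed to evacuate the whole crowd
    return max_steps

crowd = [Agent(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(50)]
print(simulate(crowd, exit_pos=(25.0, 10.0), obstacles=[(12.0, 10.0, 2.0)]))
```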

Language: English

Cited

0

LIMEADE: From AI Explanations to Advice Taking
Benjamin Charles Germain Lee, Doug Downey, Kyle Lo

et al.

ACM Transactions on Interactive Intelligent Systems, Journal year: 2023, Issue: 13(4), pp. 1 - 29

Published: March 28, 2023

Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow an AI to take advice from humans in response to explanations are similarly useful. While both capabilities are well developed for transparent learning models (e.g., linear models and GA2Ms), and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, little attention has been given to advice methods for opaque models. This article introduces LIMEADE, the first general framework that translates both positive and negative advice (expressed using a high-level vocabulary such as that employed by post hoc explanations) into an update to an arbitrary, underlying opaque model. We demonstrate the generality of our approach with case studies on 70 real-world models across two broad domains: image classification and text recommendation. We show our method improves accuracy compared to a rigorous baseline on both domains. For the text modality, we apply our framework to a neural recommender system for scientific papers on a public website; our user study shows that our framework leads to significantly higher perceived user control, trust, and satisfaction.
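LIMEADE's actual update mechanism is not detailed in the abstract. The sketch below illustrates one plausible way to turn high-level advice into an update of an opaque model, by pseudo-labeling perturbed copies of an instance along the advised feature and refitting; the function names, perturbation scheme, and use of scikit-learn are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of advice-taking for an opaque model: advice expressed
# over a high-level explanation feature is turned into pseudo-labeled
# training examples, which then drive an ordinary model update.
import numpy as np
from sklearn.neural_network import MLPClassifier

def apply_advice(model, X_train, y_train, anchor, feature_idx, direction, n=25):
    """Translate 'this feature should push toward class `direction`' into an update.

    Perturb the anchor example along the advised feature, pseudo-label the
    perturbed copies with the advised class, and refit on the augmented set.
    """
    rng = np.random.default_rng(0)
    pseudo = np.repeat(anchor[None, :], n, axis=0)
    pseudo[:, feature_idx] += rng.normal(0.0, 0.1, size=n)  # local neighborhood
    X_aug = np.vstack([X_train, pseudo])
    y_aug = np.concatenate([y_train, np.full(n, direction)])
    return model.fit(X_aug, y_aug)  # retraining stands in for a model update

X = np.random.rand(100, 5)
y = (X[:, 2] > 0.5).astype(int)
clf = MLPClassifier(max_iter=500).fit(X, y)
clf = apply_advice(clf, X, y, anchor=X[0], feature_idx=2, direction=1)
```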

Language: English

Cited

2

The Thousand Faces of Explainable AI Along the Machine Learning Life Cycle: Industrial Reality and Current State of Research
Thomas Decker, Ralf Gross,

Alexander Koebler

et al.

Lecture Notes in Computer Science, Journal year: 2023, Issue: unknown, pp. 184 - 208

Published: Jan. 1, 2023

Language: English

Cited

2

Interactive Explanations by Conflict Resolution via Argumentative Exchanges
Antonio Rago,

Hengzhi Li,

Francesca Toni

et al.

Published: July 31, 2023

As the field of explainable AI (XAI) is maturing, calls for interactive explanations for (the outputs of) AI models are growing, but the state of the art predominantly focuses on static explanations. In this paper, we focus instead on interactive explanations framed as conflict resolution between agents (i.e. AI models and/or humans) by leveraging computational argumentation. Specifically, we define Argumentative eXchanges (AXs) for dynamically sharing, in multi-agent systems, information harboured in individual agents' quantitative bipolar argumentation frameworks towards resolving conflicts amongst the agents. We then deploy AXs in the XAI setting in which a machine and a human interact about the machine's predictions. We identify and assess several theoretical properties characterising AXs that are suitable for XAI. Finally, we instantiate AXs for XAI by defining various agent behaviours, e.g. capturing counterfactual patterns of reasoning in machines and highlighting the effects of cognitive biases in humans. We show experimentally (in a simulated environment) the comparative advantages of these behaviours in terms of conflict resolution, finding that the strongest argument may not always be the most effective.
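The abstract presupposes quantitative bipolar argumentation frameworks (QBAFs), in which arguments carry base scores and attack or support one another. The toy sketch below evaluates such a framework under the DF-QuAD gradual semantics, one common choice in this literature; the example graph is invented, and the paper's actual contribution, the exchange mechanism between agents, is not shown.

```python
# Toy sketch of a quantitative bipolar argumentation framework (QBAF)
# evaluated under the DF-QuAD gradual semantics (assumes an acyclic graph).
def aggregate(strengths):
    """Combine attacker/supporter strengths: 1 - prod(1 - s)."""
    acc = 1.0
    for s in strengths:
        acc *= 1.0 - s
    return 1.0 - acc if strengths else 0.0

def strength(arg, base, attackers, supporters, memo=None):
    """DF-QuAD: move the base score down by net attack, up by net support."""
    memo = {} if memo is None else memo
    if arg in memo:
        return memo[arg]
    va = aggregate([strength(a, base, attackers, supporters, memo)
                    for a in attackers.get(arg, [])])
    vs = aggregate([strength(s, base, attackers, supporters, memo)
                    for s in supporters.get(arg, [])])
    v0 = base[arg]
    memo[arg] = v0 - v0 * (va - vs) if va >= vs else v0 + (1 - v0) * (vs - va)
    return memo[arg]

# "pred" is the machine's prediction, attacked by "doubt" (e.g. a human's
# counterargument) and supported by "evidence". Scores are invented.
base = {"pred": 0.5, "doubt": 0.8, "evidence": 0.6}
print(strength("pred", base, attackers={"pred": ["doubt"]},
               supporters={"pred": ["evidence"]}))  # 0.4
```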

Language: English

Cited

2

Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning
Emanuele Marconato, Andrea Passerini, Stefano Teso

et al.

Entropy, Journal year: 2023, Issue: 25(12), pp. 1574 - 1574

Published: Nov. 22, 2023

Research on Explainable Artificial Intelligence has recently started exploring the idea of producing explanations that, rather than being expressed in terms of low-level features, are encoded in terms of interpretable concepts learned from data. How to reliably acquire such concepts is, however, still fundamentally unclear. An agreed-upon notion of concept interpretability is missing, with the result that concepts used by both post hoc explainers and concept-based neural networks are acquired through a variety of mutually incompatible strategies. Critically, most of these neglect the human side of the problem: a representation is understandable only insofar as it can be understood by the human at the receiving end. The key challenge in human-interpretable representation learning (hrl) is how to model and operationalize this human element. In this work, we propose a mathematical framework for acquiring interpretable representations suitable for both post hoc explainers and concept-based neural networks. Our formalization of hrl builds on recent advances in causal representation learning and explicitly models a human stakeholder as an external observer. This allows us to derive a principled notion of alignment between the machine's representation and the vocabulary of concepts understood by the human. In doing so, we link alignment to a simple and intuitive name-transfer game, and we clarify the relationship between alignment and a well-known property of representations, namely disentanglement. We also show that alignment is linked to the issue of undesirable correlations among concepts, known as concept leakage, and to content-style separation, all through a general information-theoretic reformulation of these properties. Our conceptualization aims to bridge the gap between the algorithmic and human sides of interpretability and to establish a stepping stone for new research on human-interpretable representations.
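The abstract's information-theoretic reformulation is not spelled out here. As a hedged illustration of the kind of criterion involved, concept leakage is commonly expressed via mutual information between a learned concept representation and the remaining ground-truth concepts; the notation below is an assumption for illustration and may differ from the paper's exact definitions.

```latex
% Illustrative only: one common information-theoretic reading of
% "informative concepts with no leakage". Z_j is the learned representation
% of concept j, G_j its ground-truth concept, G_{\setminus j} all others.
\[
  I(Z_j; G_j) \text{ is high (informativeness)}, \qquad
  I\!\left(Z_j;\, G_{\setminus j} \mid G_j\right) = 0 \text{ (no leakage)},
  \quad \text{for every } j .
\]
```

Requiring zero conditional leakage for every j is one template under which disentanglement and content-style separation can both be read as information-theoretic constraints of this form.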

Language: English

Cited

2
