Explainable AI: roles and stakeholders, desirements and challenges
Robert R. Hoffman, Shane T. Mueller, Gary Klein et al.

Frontiers in Computer Science, Journal Year: 2023, Volume and Issue: 5

Published: Aug. 17, 2023

Introduction: The purpose of the Stakeholder Playbook is to enable developers of explainable AI systems to take into account the different ways in which stakeholders or role-holders need to "look inside" AI/XAI systems. Method: We conducted structured cognitive interviews with senior and mid-career professionals who had direct experience either developing or using AI and/or autonomous systems. Results: The results show that role-holders need access to others (e.g., trusted engineers and vendors) in order to develop satisfying mental models of AI systems. They need to know how a system fails and misleads as much as they need to know how it works. Some need an understanding that enables them to explain the AI to someone else, not just satisfy their own sense-making requirements. Only about half of our interviewees said they always wanted explanations or even needed better explanations than the ones that were provided. Based on this empirical evidence, we created a "Playbook" that lists the explanation desires, challenges, and cautions of a variety of stakeholder groups and roles. Discussion: This and other findings may seem surprising, if not paradoxical, but they can be resolved by acknowledging that role-holders have differing skill sets and differing sense-making desires. Individuals often serve in multiple roles and, therefore, have multiple immediate goals. The goal of the Playbook is to help XAI developers by guiding the development process and creating explanations that support stakeholder needs.

Language: English

Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions
Luca Longo, Mario Brčić, Federico Cabitza et al.

Information Fusion, Journal Year: 2024, Volume and Issue: 106, P. 102301 - 102301

Published: Feb. 15, 2024

Understanding black box models has become paramount as systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper highlights the advancements in XAI and its application in real-world scenarios, and addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse fields to identify open problems, striving to synchronize research agendas and accelerate progress. By fostering discussion and interdisciplinary cooperation, we aim to propel XAI forward, contributing to its continued success. To this end, we develop a comprehensive proposal for advancing XAI: a manifesto of 28 open problems categorized into nine categories. These challenges encapsulate the complexities and nuances of XAI and offer a road map for future research. For each problem, we provide promising research directions in the hope of harnessing the collective intelligence of interested stakeholders.

Language: English

Citations: 165

Explainable Artificial Intelligence (XAI): Concepts and Challenges in Healthcare
Tim Hulsen

AI, Journal Year: 2023, Volume and Issue: 4(3), P. 652 - 666

Published: Aug. 10, 2023

Artificial Intelligence (AI) describes computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. Examples of AI techniques are machine learning, neural networks, and deep learning. AI can be applied in many different areas, such as econometrics, biometry, e-commerce, and the automotive industry. In recent years, AI has found its way into healthcare as well, helping doctors make better decisions ("clinical decision support"), localizing tumors in magnetic resonance images, reading and analyzing reports written by radiologists and pathologists, and much more. However, AI has one big risk: it can be perceived as a "black box", limiting trust in its reliability, which is a very serious issue in an area in which a decision can mean life or death. As a result, the term Explainable Artificial Intelligence (XAI) has been gaining momentum. XAI tries to ensure that AI algorithms (and the resulting decisions) can be understood by humans. In this narrative review, we will have a look at some central concepts of XAI, describe several challenges around XAI in healthcare, and discuss whether it can really help healthcare to advance, for example, by increasing understanding and trust. Finally, alternatives to increase trust in AI are discussed, as well as future research possibilities in the area of XAI.

Language: English

Citations: 112

Application of explainable artificial intelligence in medical health: A systematic review of interpretability methods
Shahab S. Band, Atefeh Yarahmadi, Chung-Chian Hsu et al.

Informatics in Medicine Unlocked, Journal Year: 2023, Volume and Issue: 40, P. 101286 - 101286

Published: Jan. 1, 2023

This paper investigates the applications of explainable AI (XAI) in healthcare, which aims to provide transparency, fairness, accuracy, generality, and comprehensibility to the results obtained from ML algorithms and decision-making systems. The black box nature of these systems has remained a challenge, and interpretable techniques can potentially address this issue. Here we critically review previous studies related to interpretability methods in medical applications. Descriptions of various types of XAI methods such as layer-wise relevance propagation (LRP), Uniform Manifold Approximation and Projection (UMAP), Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), ANCHOR, contextual importance and utility (CIU), Training calibration-based explainers (TraCE), Gradient-weighted Class Activation Mapping (Grad-CAM), t-distributed Stochastic Neighbor Embedding (t-SNE), NeuroXAI, and the Explainable Cumulative Fuzzy Class Membership Criterion (X-CFCMC), along with the diseases that can be explained through these methods, are provided throughout the paper. The paper also discusses how XAI technologies can transform healthcare services. The usability and reliability of the presented methods are summarized, including studies on XGBoost for mediastinal cysts and tumors, a 3D brain tumor segmentation network, and the TraCE method for medical image analysis. Overall, we contribute to the growing field of XAI with insights for researchers, practitioners, and decision-makers in the healthcare industry. Finally, we discuss the performance of XAI methods applied in medical health care. A brief overview of the implemented methods is presented in the methodology section.
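
Most of the techniques named above are post-hoc attribution methods. For readers unfamiliar with what their output looks like, here is a minimal sketch of a global feature ranking with SHAP on a synthetic tabular model; the data, the random-forest surrogate, and the shap/scikit-learn dependencies are illustrative assumptions, not details taken from the reviewed studies.

```python
# Minimal SHAP sketch: global feature ranking for a synthetic tabular model.
# All data and model choices here are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                              # 500 synthetic samples, 4 features
y = X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=500)   # outcome depends on features 0 and 2

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)    # exact, efficient explainer for tree ensembles
shap_values = explainer.shap_values(X)   # per-sample, per-feature attributions

# Mean absolute SHAP value per feature gives a global importance ranking.
importance = np.abs(shap_values).mean(axis=0)
for i in np.argsort(importance)[::-1]:
    print(f"feature {i}: mean |SHAP| = {importance[i]:.3f}")
```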

Language: English

Citations: 110

Exploring artificial intelligence for applications of drones in forest ecology and management
Alexander Buchelt, Alexander Adrowitzer, Peter Kieseberg et al.

Forest Ecology and Management, Journal Year: 2023, Volume and Issue: 551, P. 121530 - 121530

Published: Nov. 9, 2023

This paper highlights the significance of Artificial Intelligence (AI) in the realm of drone applications in forestry. Drones have revolutionized various forest operations, and their role in mapping, monitoring, and inventory procedures is explored comprehensively. Leveraging advanced imaging technologies and data processing techniques, drones enable real-time tracking of changes in forested landscapes, facilitating effective monitoring of threats such as fire outbreaks and pest infestations. They expedite forest inventory by swiftly surveying large areas, providing precise data on tree species identification, size estimation, and health assessment, thus supporting informed decision-making and sustainable management practices. Moreover, drones contribute to planting, pruning, and harvesting, while supporting reforestation efforts in real time. Wildlife monitoring is also enhanced, aiding the identification of conservation concerns and informing targeted strategies. Drones offer a safer and more efficient alternative for search and rescue operations within dense forests, reducing response time and improving outcomes. Additionally, drones equipped with thermal cameras allow early detection of wildfires, enabling timely response, mitigation, and preservation efforts. The integration of AI holds immense potential for enhancing forestry practices and contributing to sustainable land management. In the future, explainable AI (XAI) can improve trust, safety, and transparency in decision-making, clarify liability issues, and support operations. XAI also facilitates better environmental impact analysis: if a drone's AI can explain its actions, it is easier to understand why it chose a particular path or action, which could inform improvements.

Language: English

Citations: 58

Harnessing eXplainable artificial intelligence for feature selection in time series energy forecasting: A comparative analysis of Grad-CAM and SHAP
Corne van Zyl, Xianming Ye, Raj Naidoo et al.

Applied Energy, Journal Year: 2023, Volume and Issue: 353, P. 122079 - 122079

Published: Oct. 17, 2023

This study investigates the efficacy of Explainable Artificial Intelligence (XAI) methods, specifically Gradient-weighted Class Activation Mapping (Grad-CAM) and Shapley Additive Explanations (SHAP), in the feature selection process for national demand forecasting. Utilising a multi-headed Convolutional Neural Network (CNN), both XAI methods exhibit capabilities in enhancing forecasting accuracy and model efficiency by identifying and eliminating irrelevant features. Comparative analysis revealed Grad-CAM's exceptional computational efficiency in high-dimensional applications and SHAP's superior ability in revealing features that degrade forecast accuracy. However, limitations are found with both methods, including Grad-CAM retaining features that decrease model stability, and SHAP inaccurately ranking significant features. Future research should focus on refining these methods to overcome their limitations and further probe other XAI methods' applicability within the time-series domain. This study underscores the potential of XAI in improving load forecasting, which can contribute significantly to the development of more interpretative, accurate, and efficient forecasting models.
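
As a rough illustration of SHAP-guided feature elimination for forecasting, the sketch below ranks candidate predictors and drops near-irrelevant ones. The study itself used a multi-headed CNN with Grad-CAM and SHAP; a gradient-boosted tree, synthetic lag features, and the 1% retention threshold are substituted here purely as assumptions to keep the example small.

```python
# Sketch of SHAP-guided feature screening for a demand-forecasting model.
# Synthetic data; a tree model stands in for the paper's multi-headed CNN.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 1000
features = rng.normal(size=(n, 6))   # 6 candidate lag/weather predictors
demand = 2.0 * features[:, 0] + 0.8 * features[:, 3] + 0.1 * rng.normal(size=n)

model = GradientBoostingRegressor(random_state=1).fit(features, demand)
shap_vals = shap.TreeExplainer(model).shap_values(features)

# Keep only features whose mean |SHAP| clears an (assumed) 1% threshold.
importance = np.abs(shap_vals).mean(axis=0)
keep = importance > 0.01 * importance.max()
print("retained feature indices:", np.flatnonzero(keep))   # expect features 0 and 3
```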

Language: English

Citations: 54

Post-hoc vs ante-hoc explanations: xAI design guidelines for data scientists
Carl Orge Retzlaff, Alessa Angerschmid, Anna Saranti et al.

Cognitive Systems Research, Journal Year: 2024, Volume and Issue: 86, P. 101243 - 101243

Published: May 6, 2024

The growing field of explainable Artificial Intelligence (xAI) has given rise to a multitude of techniques and methodologies, yet this expansion has created a gap between existing xAI approaches and their practical application. This poses a considerable obstacle for data scientists striving to identify the optimal xAI technique for their needs. To address this problem, our study presents a customized decision support framework to aid data scientists in choosing a suitable xAI approach for their use-case. Drawing from a literature survey and insights from interviews with five experienced data scientists, we introduce a decision tree based on the trade-offs inherent in various xAI approaches, guiding the selection among six commonly used xAI tools. Our work critically examines prevalent ante-hoc and post-hoc methods, assessing their applicability in real-world contexts through expert interviews. The aim is to equip data scientists and policymakers with the capacity to select xAI methods that not only demystify the decision-making process, but also enrich user understanding and interpretation, ultimately advancing the application of xAI in practical settings.
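
To make the ante-hoc/post-hoc distinction at the heart of these guidelines concrete, the sketch below contrasts an inherently interpretable model (a shallow decision tree, readable as rules) with a post-hoc explanation of a black box (permutation importance on a random forest). The synthetic data and method pairing are illustrative assumptions; the paper's actual decision framework is not reproduced here.

```python
# Ante-hoc vs post-hoc explanation, side by side (illustrative sketch).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 3))
y = (X[:, 1] > 0).astype(int)   # label depends only on feature f1

# Ante-hoc: the model itself is the explanation, readable as rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=3).fit(X, y)
print(export_text(tree, feature_names=["f0", "f1", "f2"]))

# Post-hoc: a black box explained after training, here via permutation importance.
forest = RandomForestClassifier(random_state=3).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=10, random_state=3)
print("post-hoc importances:", result.importances_mean.round(3))
```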

Language: English

Citations: 49

The EU Artificial Intelligence Act (2024): Implications for healthcare
Hannah van Kolfschooten, Janneke van Oirschot

Health Policy, Journal Year: 2024, Volume and Issue: 149, P. 105152 - 105152

Published: Sept. 7, 2024

Language: English

Citations: 23

Explainable AI and Causal Understanding: Counterfactual Approaches Considered
Sam Baron

Minds and Machines, Journal Year: 2023, Volume and Issue: 33(2), P. 347 - 377

Published: June 1, 2023

The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (Inform Fus 81:59–83, 2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the approach has been developed. To date, the counterfactual approach has not been developed in concert with the framework for specifying causes developed by Pearl (Causality: Models, reasoning, and inference. Cambridge University Press, 2000) and Woodward (Making things happen: A theory of causal explanation. Oxford University Press, 2003). In this paper, I build on Chou et al.'s work by applying the Pearl-Woodward approach. I argue that the standard counterfactual approach to XAI is capable of delivering causal understanding, but that there are limitations on its capacity to do so. I suggest a way to overcome these limitations.
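
For readers new to the approach, a counterfactual explanation answers "what minimal change to the input would have flipped the decision?". The sketch below finds such a change by brute-force search over single-feature perturbations of a toy classifier; the data, model, and search procedure are illustrative assumptions and do not reproduce Baron's Pearl-Woodward analysis.

```python
# Toy counterfactual explanation: smallest single-feature change that flips a decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

x = np.array([[-0.5, 0.5]])   # an instance the model labels 0
assert clf.predict(x)[0] == 0

best = None
for feat in range(2):         # perturb one feature at a time
    for delta in np.linspace(-3, 3, 601):
        x_cf = x.copy()
        x_cf[0, feat] += delta
        if clf.predict(x_cf)[0] != 0 and (best is None or abs(delta) < abs(best[1])):
            best = (feat, delta)

feat, delta = best
print(f"counterfactual: change feature {feat} by {delta:+.2f} to flip the decision")
```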

Language: English

Citations: 32

Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
Luca Nannini, Agathe Balayn, Adam Leon Smith et al.

2023 ACM Conference on Fairness, Accountability, and Transparency, Journal Year: 2023, Volume and Issue: unknown, P. 1198 - 1212

Published: June 12, 2023

Public attention towards the explainability of artificial intelligence (AI) systems has been rising in recent years, with explainability offered as a methodology for human oversight. This has translated into a proliferation of research outputs, such as from Explainable AI, that aim to enhance transparency and control for system debugging and monitoring, and intelligibility of system process and output for user services. Yet, such outputs are difficult to adopt on a practical level due to the lack of a common regulatory baseline and the contextual nature of explanations. Governmental policies are now attempting to tackle this exigence; however, it remains unclear to what extent published communications, regulations, and standards adopt an informed perspective to support research, industry, and civil interests. In this study, we perform the first thematic and gap analysis of the plethora of explainability policies in the EU, US, and UK. Through a rigorous survey of policy documents, we contribute an overview of governmental regulatory trajectories within AI explainability and its sociotechnical impacts. We find that policies are often informed by coarse notions and requirements for explanations. This might be due to the willingness to conciliate explanations foremost as a risk management tool for AI oversight, but also to the lack of consensus on what constitutes a valid algorithmic explanation and how feasible its implementation and deployment are across the stakeholders of an organization. Informed by this analysis, we then conduct a gap analysis of existing policies, which leads us to formulate a set of recommendations on how to address explainability in regulations for AI systems, especially discussing the definition, feasibility, and usability of explanations, as well as allocating accountability to explanation providers.

Language: English

Citations: 29

Do stakeholder needs differ? - Designing stakeholder-tailored Explainable Artificial Intelligence (XAI) interfaces
Minjung Kim, Saebyeol Kim, Jin Woo Kim et al.

International Journal of Human-Computer Studies, Journal Year: 2023, Volume and Issue: 181, P. 103160 - 103160

Published: Sept. 23, 2023

Explainable AI (XAI) is increasingly being used in the healthcare domain. In health management, clinicians and patients are critical stakeholders, requiring tailored XAI explanations based on their unique needs. Our study investigates the differences in explanation needs between these stakeholders and designs corresponding XAI interfaces for each group. Using a scenario-based approach, we assessed stakeholder-tailored explanation needs, analyzed the differences, and designed XAI interfaces using theoretical frameworks. The results demonstrate diverse stakeholder motivations for seeking explanations, leading to varied explanation requirements. The designed interfaces effectively address these requirements, as validated by preference selection and qualitative feedback from clinicians and patients. Their suggestions provide design insights and highlight the divergent needs of the two groups. This study contributes practical implications to XAI research, emphasizing the importance of understanding stakeholders and incorporating relevant concepts into user-centered XAI interface design.

Language: English

Citations: 24