Explainable Image Classification: The Journey So Far and the Road Ahead
Vidhya Kamakshi, Narayanan C. Krishnan

AI, Journal Year: 2023, Volume and Issue: 4(3), P. 620 - 651

Published: Aug. 1, 2023

Explainable Artificial Intelligence (XAI) has emerged as a crucial research area to address the interpretability challenges posed by complex machine learning models. In this survey paper, we provide a comprehensive analysis of existing approaches in the field of XAI, focusing on the tradeoff between model accuracy and interpretability. Motivated by the need to navigate this tradeoff, we conduct an extensive review of the literature, presenting a multi-view taxonomy that offers a new perspective on XAI methodologies. We analyze various sub-categories of methods, considering their strengths, weaknesses, and practical challenges. Moreover, we explore causal relationships in explanations and discuss approaches dedicated to explaining cross-domain classifiers. The latter is particularly important in scenarios where the training and test data are sampled from different distributions. Drawing on insights from our analysis, we propose future research directions, including exploring explainable allied learning paradigms, developing evaluation metrics for both traditionally trained and learning-based classifiers, and applying neural architecture search techniques to minimize the accuracy–interpretability tradeoff. This paper provides an overview of the state of the art, serving as a valuable resource for researchers and practitioners interested in understanding and advancing the field.

Language: English

Artificial intelligence capability and organizational performance: unraveling the mediating mechanisms of decision-making processes
Suheil Neiroukh, Okechukwu Lawrence Emeagwali, Hasan Yousef Aljuhmani

et al.

Management Decision, Journal Year: 2024, Volume and Issue: unknown

Published: June 12, 2024

Purpose – This study investigates the impact of artificial intelligence (AI) capabilities on decision-making processes and organizational performance, addressing a crucial gap in the literature by exploring the mediating roles of decision-making speed and quality.

Design/methodology/approach – Drawing upon resource-based theory and prior research, the study constructs a comprehensive model and hypotheses to illuminate the influence of AI capability within organizations on decision-making speed, decision quality, and, ultimately, organizational performance. A dataset comprising 230 responses from diverse firms forms the basis of the analysis, with the study employing partial least squares structural equation modeling (PLS-SEM) for robust data examination.

Findings – The results demonstrate the pivotal role of AI capability in shaping organizational outcomes: AI capability significantly and positively affects decision-making speed, decision quality, and overall organizational performance. Notably, decision-making speed is a critical factor contributing to enhanced performance. The study further uncovered mediation effects, suggesting that decision-making processes partially mediate the relationship between AI capability and organizational performance through decision-making speed.

Originality/value – The study contributes to the existing body of knowledge by providing empirical evidence of the multifaceted impact of AI capabilities, elucidating the complex mechanisms through which they drive organizational success.

Language: English

Citations

21

First-principles and machine-learning approaches for interpreting and predicting the properties of MXenes
José D. Gouveia, Tiago L. P. Galvão, Kais Iben Nassar

et al.

npj 2D Materials and Applications, Journal Year: 2025, Volume and Issue: 9(1)

Published: Feb. 1, 2025

MXenes are a versatile family of 2D inorganic materials with applications in energy storage, shielding, sensing, and catalysis. This review highlights computational studies using density functional theory and machine-learning approaches to explore their structure (stacking, functionalization, doping), properties (electronic, mechanical, magnetic), and application potential. Key advances and challenges are critically examined, offering insights into applying computational research to transition these materials from the lab to practical use.

Language: English

Citations

4

OCT-based diagnosis of glaucoma and glaucoma stages using explainable machine learning
Md. Mahmudul Hasan, Jack Phu, Henrietta Wang

et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: Jan. 28, 2025

Abstract Glaucoma poses a growing health challenge projected to escalate in the coming decades. However, current automated diagnostic approaches rely solely on black-box deep learning models, lacking explainability and trustworthiness. To address this issue, this study uses optical coherence tomography (OCT) images to develop an explainable artificial intelligence (XAI) tool for diagnosing and staging glaucoma, with a focus on its clinical applicability. A total of 334 normal and 268 glaucomatous eyes (86 early, 72 moderate, 110 advanced) were included; signal processing theory was employed, and model interpretability was rigorously evaluated. Leveraging SHapley Additive exPlanations (SHAP)-based global feature ranking and partial dependency analysis (PDA), the estimated decision boundary cut-offs of the machine learning (ML) models were used in a novel algorithm developed to implement the XAI tool. Using the selected features, the ML models produced AUCs of 0.96 (95% CI: 0.95–0.98), 0.98 (95% CI: 0.96–1.00), and 1.00 (95% CI: 1.00–1.00), respectively, in differentiating early, moderate, and advanced glaucoma patients. Overall, the tool outperformed clinicians in early-stage detection, with 10.4–11.2% higher overall accuracy. The user-friendly software shows potential as a valuable tool for eye care practitioners, offering transparent and interpretable insights to improve clinical decision-making.
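The pipeline described above ranks features globally, then thresholds the top-ranked ones for a transparent classifier. A minimal sketch of that idea, assuming synthetic data in place of the study's OCT measurements and scikit-learn's permutation importance standing in for SHAP (the paper's actual ranking method):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for OCT-derived features (NOT the study's data).
X, y = make_classification(n_samples=600, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Global feature ranking: permutation importance here, SHAP in the paper.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]  # most important first

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("top features:", ranking[:4], "AUC:", round(auc, 2))
```

The top-ranked features would then feed the cut-off estimation step (via partial dependence in the paper); the numbers printed here are illustrative only.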

Language: English

Citations

3

The effect of transparency and trust on intelligent system acceptance: Evidence from a user-based study
Jonas Wanner, Lukas-Valentin Herm, Kai Heinrich

et al.

Electronic Markets, Journal Year: 2022, Volume and Issue: 32(4), P. 2079 - 2102

Published: Oct. 23, 2022

Abstract Contemporary decision support systems increasingly rely on artificial intelligence technology, such as machine learning algorithms, to form intelligent systems. These systems have human-like decision capacity for selected applications, based on a rationale which cannot be looked up conveniently and constitutes a black box. As a consequence, acceptance by end-users remains somewhat hesitant. While lacking transparency has been said to hinder trust and enforce aversion towards these systems, studies that connect user trust to transparency and subsequently to acceptance are scarce. In response, our research is concerned with the development of a theoretical model that explains end-user acceptance of intelligent systems. We utilize the unified theory of acceptance and use of technology from information systems research, as well as explanation-related theories, as an initial basis. The proposed model is tested in an industrial maintenance workplace scenario using experts as participants to represent the user group. Results show that acceptance is performance-driven at first sight. However, transparency plays an important indirect role in regulating trust and the perception of performance.

Language: English

Citations

55

Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements
Laith Alzubaidi, Aiman Al-Sabaawi, Jinshuai Bai

et al.

International Journal of Intelligent Systems, Journal Year: 2023, Volume and Issue: 2023, P. 1 - 41

Published: Oct. 26, 2023

Given the tremendous potential and influence of artificial intelligence (AI) and algorithmic decision-making (DM), these systems have found wide-ranging applications across diverse fields, including education, business, the healthcare industries, government, and justice sectors. While AI and DM offer significant benefits, they also carry a risk of unfavourable outcomes for users and society. As a result, ensuring their safety, reliability, and trustworthiness becomes crucial. This article aims to provide a comprehensive review of the synergy between AI and DM, focussing on the importance of trustworthiness. The article addresses the following four key questions, guiding readers towards a deeper understanding of this topic: (i) why do we need trustworthy AI? (ii) what are the requirements for trustworthy AI? In line with the second question, the requirements that establish trustworthiness are explained: explainability, accountability, robustness, fairness, acceptance of AI, privacy, accuracy, reproducibility, and human agency and oversight. (iii) how can we have trustworthy data? and (iv) what are the priorities in terms of trustworthy AI for the most challenging applications? Regarding the last question, six different applications are discussed: environmental science, 5G-based IoT networks, robotics, architecture, engineering and construction, financial technology, and healthcare. The article emphasises that trustworthiness must be addressed before deployment in order to achieve the goal of AI for good. An example is provided that demonstrates how trustworthy AI can be employed to eliminate bias in human resources management systems. The insights and recommendations presented in this paper will serve as a valuable guide for researchers seeking to develop trustworthy AI applications.

Language: English

Citations

40

Personalising intravenous to oral antibiotic switch decision making through fair interpretable machine learning
W. Bolton, Richard Wilson, Mark Gilchrist

et al.

Nature Communications, Journal Year: 2024, Volume and Issue: 15(1)

Published: Jan. 13, 2024

Abstract Antimicrobial resistance (AMR) and healthcare-associated infections pose a significant threat globally. One key prevention strategy is to follow antimicrobial stewardship practices, in particular, to maximise targeted oral therapy and reduce the use of indwelling vascular devices for intravenous (IV) administration. Appreciating when an individual patient can switch from IV to oral antibiotic treatment is often non-trivial and not standardised. To tackle this problem we created a machine learning model to predict when a patient could switch, based on routinely collected clinical parameters. 10,362 unique intensive care unit stays were extracted and two informative feature sets were identified. Our best model achieved a mean AUROC of 0.80 (SD 0.01) on the hold-out set while not being biased against individuals with protected characteristics. Interpretability methodologies were employed to create clinically useful visual explanations. In summary, our model provides individualised, fair, and interpretable predictions for IV-to-oral antibiotic switching. Prospective evaluation of safety and efficacy is needed before such technology can be applied clinically.
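The evaluation described above, hold-out AUROC plus a bias check across protected characteristics, can be sketched minimally. This assumes synthetic data, a logistic regression stand-in for the study's model, and a hypothetical binary protected attribute; the study's actual features, model, and fairness methodology are not reproduced here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for routinely collected clinical parameters.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
group = rng.integers(0, 2, size=len(y))  # hypothetical protected attribute

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

# Hold-out discrimination, overall and per protected group.
overall = roc_auc_score(y_te, scores)
per_group = {g: roc_auc_score(y_te[g_te == g], scores[g_te == g])
             for g in (0, 1)}
gap = abs(per_group[0] - per_group[1])
print("AUROC:", round(overall, 2), "group gap:", round(gap, 2))
```

A small per-group AUROC gap is one (coarse) indicator that performance does not differ systematically across the protected attribute; real fairness auditing would examine further metrics and subgroups.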

Language: English

Citations

17

Strategies for Implementing Machine Learning Algorithms in the Clinical Practice of Radiology
Allison Chae, Michael S. Yao, Hersh Sagreiya

et al.

Radiology, Journal Year: 2024, Volume and Issue: 310(1)

Published: Jan. 1, 2024

Despite recent advancements in machine learning (ML) applications in health care, there have been few benefits and improvements to clinical medicine in the hospital setting. To facilitate the adaptation of ML methods, this review proposes a standardized framework for the step-by-step implementation of artificial intelligence into the clinical practice of radiology that focuses on three key components: problem identification, stakeholder alignment, and pipeline integration. A literature review of empirical evidence in radiologic imaging justifies this approach and offers a discussion on structuring implementation efforts to help other practices leverage ML to improve patient care. Clinical trial registration no. 04242667. © RSNA, 2024

Language: English

Citations

15

Explainable and Interpretable Machine Learning for Antimicrobial Stewardship: Opportunities and Challenges
Daniele Roberto Giacobbe, Cristina Marelli, Sabrina Guastavino

et al.

Clinical Therapeutics, Journal Year: 2024, Volume and Issue: 46(6), P. 474 - 480

Published: March 21, 2024

Language: English

Citations

14

Recent advancements and applications of deep learning in heart failure: A systematic review
Georgios Petmezas, Vasileios E. Papageorgiou, Vasileios Vassilikos

et al.

Computers in Biology and Medicine, Journal Year: 2024, Volume and Issue: 176, P. 108557 - 108557

Published: May 7, 2024

Language: English

Citations

13

Is this AI sexist? The effects of a biased AI’s anthropomorphic appearance and explainability on users’ bias perceptions and trust
Tsung-Yu Hou, Yu-Chia Tseng, Chien Wen Yuan

et al.

International Journal of Information Management, Journal Year: 2024, Volume and Issue: 76, P. 102775 - 102775

Published: March 16, 2024

Language: English

Citations

11