Interpretable machine learning for dementia: A systematic review
Sophie Martin, Florence J. Townend, Frederik Barkhof

et al.

Alzheimer's & Dementia, Journal Year: 2023, Volume and Issue: 19(5), P. 2135 - 2149

Published: Feb. 3, 2023

Abstract Introduction: Machine learning research into automated dementia diagnosis is becoming increasingly popular, but so far it has had limited clinical impact. A key challenge is building robust and generalizable models whose decisions can be reliably explained. Some models are designed to be inherently “interpretable,” whereas post hoc “explainability” methods are used for other models. Methods: Here we sought to summarize the state-of-the-art of interpretable machine learning for dementia. Results: We identified 92 studies using PubMed, Web of Science, and Scopus. Studies demonstrate promising classification performance but vary in their validation procedures and reporting standards, and they rely heavily on popular data sets. Discussion: Future work should incorporate clinicians to validate explanation methods and make conclusive inferences about dementia-related disease pathology. Critically analyzing model explanations also requires an understanding of interpretability itself. Patient-specific explanations are required if these methods are to benefit clinical practice.

Language: English

AI in health and medicine
Pranav Rajpurkar, Emma Chen, Oishi Banerjee

et al.

Nature Medicine, Journal Year: 2022, Volume and Issue: 28(1), P. 31 - 38

Published: Jan. 1, 2022

Language: English

Citations

1457

The false hope of current approaches to explainable artificial intelligence in health care
Marzyeh Ghassemi, Luke Oakden‑Rayner, Andrew L. Beam

et al.

The Lancet Digital Health, Journal Year: 2021, Volume and Issue: 3(11), P. e745 - e750

Published: Oct. 25, 2021

The black-box nature of current artificial intelligence (AI) has caused some to question whether AI must be explainable to be used in high-stakes scenarios such as medicine. It has been argued that explainable AI will engender trust with the health-care workforce, provide transparency into the decision-making process, and potentially mitigate various kinds of bias. In this Viewpoint, we argue that this argument represents a false hope for explainability, and that current explainability methods are unlikely to achieve these goals for patient-level decision support. We provide an overview of current explainability techniques and highlight how their failure cases can cause problems for decision making about individual patients. In the absence of suitable explainability methods, we advocate rigorous internal and external validation of AI models as a more direct means of achieving the goals often associated with explainability, and we caution against having explainability be a requirement for clinically deployed models.

Language: English

Citations

740

ChatGPT and Other Large Language Models Are Double-edged Swords
Yiqiu Shen, Laura Heacock, Jonathan Elias

et al.

Radiology, Journal Year: 2023, Volume and Issue: 307(2)

Published: Jan. 26, 2023

Language: English

Citations

729

Explainable artificial intelligence (XAI) in deep learning-based medical image analysis
Bas H. M. van der Velden, Hugo J. Kuijf, Kenneth G. A. Gilhuijs

et al.

Medical Image Analysis, Journal Year: 2022, Volume and Issue: 79, P. 102470 - 102470

Published: May 4, 2022

With an increase in deep learning-based methods, the call for explainability of such methods grows, especially in high-stakes decision making areas such as medical image analysis. This survey presents an overview of eXplainable Artificial Intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods. Papers on XAI techniques in medical image analysis are then surveyed and categorized according to the framework and according to anatomical location. The paper concludes with an outlook of future opportunities for XAI in medical image analysis.

Language: English

Citations

656

Transformers in medical imaging: A survey
Fahad Shamshad, Salman Khan, Syed Waqas Zamir

et al.

Medical Image Analysis, Journal Year: 2023, Volume and Issue: 88, P. 102802 - 102802

Published: April 5, 2023

Language: English

Citations

591

Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology
Jeremy Petch, Shuang Di, Walter Nelson

et al.

Canadian Journal of Cardiology, Journal Year: 2021, Volume and Issue: 38(2), P. 204 - 213

Published: Sept. 14, 2021

Many clinicians remain wary of machine learning because of longstanding concerns about “black box” models. “Black box” is shorthand for models that are sufficiently complex that they are not straightforwardly interpretable to humans. Lack of interpretability in predictive models can undermine trust in those models, especially in health care, in which so many decisions are, quite literally, life-and-death issues. There has been a recent explosion of research in the field of explainable machine learning aimed at addressing these concerns. The promise of explainable machine learning is considerable, but it is important for cardiologists who may encounter these techniques in clinical decision-support tools or novel research papers to have a critical understanding of both their strengths and their limitations. This paper reviews key concepts and techniques in the field of explainable machine learning as they apply to cardiology. Key concepts reviewed include interpretability vs explainability and global vs local explanations. Techniques demonstrated include permutation importance, surrogate decision trees, local interpretable model-agnostic explanations, and partial dependence plots. We discuss several limitations of explainability techniques, focusing on how the nature of explanations as approximations may omit important information about how black-box models work and why they make certain predictions. We conclude by proposing a rule of thumb about when it is appropriate to use black-box models rather than explainable models.
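The permutation importance technique reviewed in this paper can be illustrated with a minimal, self-contained sketch: shuffle one feature's column, re-score the model, and read the average drop in accuracy as that feature's importance. Everything below (feature names, values, and the stand-in "black box") is hypothetical and only meant to show the mechanics, not the paper's own experiments.

```python
import random

# Hypothetical toy data: each row is [age, systolic_bp, cholesterol]; labels are 0/1.
X = [[63, 145, 233], [37, 130, 250], [56, 120, 236], [57, 140, 192],
     [71, 160, 286], [44, 118, 242], [52, 172, 199], [64, 110, 211]]
y = [1, 0, 0, 0, 1, 0, 1, 1]

def model(row):
    # Stand-in "black box": predicts 1 when a weighted score passes a threshold.
    # Note it ignores cholesterol entirely, so its importance should be zero.
    return 1 if 0.05 * row[0] + 0.02 * row[1] > 6.0 else 0

def accuracy(X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, n_repeats=100, seed=0):
    """Mean drop in accuracy when the given feature column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(X, y)
    drops = []
    for _ in range(n_repeats):
        col = [r[feature] for r in X]
        rng.shuffle(col)  # break the feature-label association
        X_perm = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(X, col)]
        drops.append(base - accuracy(X_perm, y))
    return sum(drops) / n_repeats

for i, name in enumerate(["age", "systolic_bp", "cholesterol"]):
    print(name, round(permutation_importance(X, y, i), 3))
```

Because the toy model never reads the cholesterol column, shuffling it leaves predictions unchanged and its importance is exactly zero, which is the behavior the technique is designed to reveal.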

Language: English

Citations

367

Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities
Waddah Saeed, Christian W. Omlin

Knowledge-Based Systems, Journal Year: 2023, Volume and Issue: 263, P. 110273 - 110273

Published: Jan. 11, 2023

The past decade has seen significant progress in artificial intelligence (AI), which has resulted in algorithms being adopted for resolving a variety of problems. However, this success has been met by increasing model complexity and the employment of black-box AI models that lack transparency. In response to this need, Explainable AI (XAI) has been proposed to make AI more transparent and thus advance its adoption in critical domains. Although there are several reviews of XAI topics in the literature that have identified challenges and potential research directions for XAI, these are scattered. This study, hence, presents a systematic meta-survey of challenges and future research directions in XAI, organized in two themes: (1) general challenges and research directions of XAI and (2) challenges and research directions of XAI based on the machine learning life cycle's phases: design, development, and deployment. We believe our meta-survey contributes to the XAI literature by providing a guide for future exploration in the area.

Language: English

Citations

365

Artificial intelligence for multimodal data integration in oncology
Jana Lipková, Richard J. Chen, Bowen Chen

et al.

Cancer Cell, Journal Year: 2022, Volume and Issue: 40(10), P. 1095 - 1110

Published: Oct. 1, 2022

In oncology, the patient state is characterized by a whole spectrum of modalities, ranging from radiology, histology, and genomics to electronic health records. Current artificial intelligence (AI) models operate mainly in the realm of a single modality, neglecting the broader clinical context, which inevitably diminishes their potential. Integration of different data modalities provides opportunities to increase the robustness and accuracy of diagnostic and prognostic models, bringing AI closer to clinical practice. AI models are also capable of discovering novel patterns within and across modalities suitable for explaining differences in patient outcomes or treatment resistance. The insights gleaned from such models can guide exploration studies and contribute to the discovery of novel biomarkers and therapeutic targets. To support these advances, here we present a synopsis of AI methods and strategies for multimodal data fusion and association discovery. We outline approaches for AI interpretability and directions for AI-driven exploration through multimodal data interconnections. We examine challenges in clinical adoption and discuss emerging solutions.
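One of the simplest multimodal-integration strategies alluded to above is decision-level ("late") fusion: each modality produces its own probability and the results are combined. The sketch below averages hypothetical per-modality probabilities for a single patient; the modality names, scores, and weights are all invented for illustration and are not from the paper.

```python
# Hypothetical per-modality probabilities of an outcome for one patient.
modality_scores = {
    "radiology": 0.72,   # e.g. from an imaging model
    "histology": 0.61,   # e.g. from a slide-level classifier
    "genomics": 0.35,    # e.g. from a mutation-signature model
    "ehr": 0.55,         # e.g. from a structured-records model
}

def late_fusion(scores, weights=None):
    """Weighted average of per-modality probabilities (decision-level fusion).

    With no weights given, every modality contributes equally.
    """
    if weights is None:
        weights = {m: 1.0 for m in scores}
    total = sum(weights[m] for m in scores)
    return sum(weights[m] * p for m, p in scores.items()) / total

print(round(late_fusion(modality_scores), 4))
```

Late fusion is easy to interpret (each modality's contribution is explicit) but cannot model cross-modal interactions; early or intermediate fusion, where features are combined before the final prediction, trades that transparency for expressiveness.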

Language: English

Citations

322

Artificial intelligence and machine learning for medical imaging: A technology review
Ana María Barragán Montero, Umair Javaid, Gilmer Valdés

et al.

Physica Medica, Journal Year: 2021, Volume and Issue: 83, P. 242 - 256

Published: March 1, 2021

Language: English

Citations

270

U-Net-Based Medical Image Segmentation
Xiaoxia Yin, Le Sun, Yuhan Fu

et al.

Journal of Healthcare Engineering, Journal Year: 2022, Volume and Issue: 2022, P. 1 - 16

Published: April 15, 2022

Deep learning has been extensively applied to segmentation in medical imaging. U-Net, proposed in 2015, shows the advantages of accurate segmentation of small targets and of its scalable network architecture. With the increasing requirements for segmentation performance in medical imaging in recent years, U-Net has been cited academically more than 2500 times. Many scholars have been constantly developing the U-Net architecture. This paper summarizes medical image segmentation technologies based on the U-Net structure and its variants concerning their structure, innovation, efficiency, etc.; reviews and categorizes the related methodology; and introduces the loss functions, evaluation parameters, and modules commonly applied to segmentation in medical imaging, which will provide a good reference for future research.
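The defining U-Net ingredient this review builds on, skip connections between the contracting and expanding paths, can be shown with a deliberately tiny 1-D toy (pure Python, no learned weights): downsample, upsample, then merge the upsampled features with the kept high-resolution ones. A real U-Net uses stacks of convolutions and channel concatenation at each level; this sketch only mimics the data flow.

```python
# Toy 1-D "U-Net" data flow, illustrative only.

def downsample(x):
    # Average pooling with stride 2 (the contracting path halves the resolution).
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def upsample(x):
    # Nearest-neighbour upsampling (the expanding path doubles the resolution).
    return [v for v in x for _ in range(2)]

def unet_1d(x):
    skip = x                      # keep high-resolution features for later
    encoded = downsample(x)       # contracting path
    decoded = upsample(encoded)   # expanding path
    # Skip connection: merge each upsampled value with the original-resolution one.
    # A real U-Net concatenates channels here and applies further convolutions.
    return [(s + d) / 2 for s, d in zip(skip, decoded)]

signal = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
print(unet_1d(signal))  # edges stay sharper than with the downsampled path alone
```

The point of the skip connection is visible even in this toy: without it, the output would be the blurred `upsample(downsample(x))`; merging in the stored high-resolution features restores detail that pooling discarded, which is exactly why U-Net segments small targets well.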

Language: English

Citations

238