Interpretable machine learning for dementia: A systematic review DOI
Sophie Martin, Florence J. Townend, Frederik Barkhof

et al.

Alzheimer's & Dementia, Journal Year: 2023, Volume and Issue: 19(5), P. 2135 - 2149

Published: Feb. 3, 2023

Abstract Introduction Machine learning research into automated dementia diagnosis is becoming increasingly popular but so far has had limited clinical impact. A key challenge is building robust and generalizable models that generate decisions that can be reliably explained. Some models are designed to be inherently "interpretable," whereas post hoc "explainability" methods are used for other models. Methods Here we sought to summarize the state-of-the-art of interpretable machine learning for dementia. Results We identified 92 studies using PubMed, Web of Science, and Scopus. Studies demonstrate promising classification performance but vary in their validation procedures and reporting standards and rely heavily on popular data sets. Discussion Future work should incorporate clinicians to validate explanation methods and make conclusive inferences about dementia-related disease pathology. Critically analyzing model explanations also requires an understanding of interpretability itself. Patient-specific explanations are required to benefit clinical practice.
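As a hedged illustration of the distinction this review draws between inherently interpretable models and post hoc explainability methods, the sketch below fits a logistic regression (whose coefficients are themselves the explanation) and a black-box random forest that is explained afterwards with permutation importance. The feature names are hypothetical placeholders and the code is a generic scikit-learn example, not code from the review.

```python
# Minimal sketch (not from the review) contrasting an inherently interpretable
# model with a post hoc explanation of a black-box model.
# Feature names are hypothetical placeholders for dementia-related inputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

feature_names = ["hippocampal_volume", "mmse_score", "age", "apoe4_carrier"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Inherently interpretable: the fitted coefficients are the explanation.
linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)
for name, coef in zip(feature_names, linear.coef_[0]):
    print(f"{name}: coefficient = {coef:+.3f}")

# Black-box model explained post hoc via permutation feature importance.
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: permutation importance = {imp:.3f}")
```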

Language: English

Surgical data science – from concepts toward clinical translation DOI Creative Commons
Lena Maier‐Hein, Matthias Eisenmann, Duygu Sarıkaya

et al.

Medical Image Analysis, Journal Year: 2021, Volume and Issue: 76, P. 102306 - 102306

Published: Nov. 18, 2021

Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the field of radiological data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for advances in the field. Based on an international workshop involving leading researchers in SDS, we review the current practice, key achievements and initiatives, as well as available standards and tools, for topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on a multi-round Delphi process.

Language: English

Citations

226

Artificial intelligence in liver diseases: Improving diagnostics, prognostics and response prediction DOI Creative Commons
David Nam, Julius Chapiro, Valérie Paradis

et al.

JHEP Reports, Journal Year: 2022, Volume and Issue: 4(4), P. 100443 - 100443

Published: Feb. 2, 2022

Clinical routine in hepatology involves the diagnosis and treatment of a wide spectrum of metabolic, infectious, autoimmune and neoplastic diseases. Clinicians integrate qualitative and quantitative information from multiple data sources to make a diagnosis, prognosticate the disease course, and recommend a treatment. In the last 5 years, advances in artificial intelligence (AI), particularly in deep learning, have made it possible to extract clinically relevant information from complex and diverse clinical datasets. In particular, histopathology and radiology image data contain diagnostic, prognostic and predictive information which AI can extract. Ultimately, such systems could be implemented in clinical routine as decision support tools. However, in the context of hepatology, this requires further large-scale validation and regulatory approval. Herein, we summarise the state of the art, with a particular focus on histopathology and radiology data. We present a roadmap for the development of novel biomarkers and outline the critical obstacles that need to be overcome.

Language: English

Citations

140

To buy or not to buy—evaluating commercial AI solutions in radiology (the ECLAIR guidelines) DOI Creative Commons
Patrick Omoumi, Alexis Ducarouge, Antoine Tournier

et al.

European Radiology, Journal Year: 2021, Volume and Issue: 31(6), P. 3786 - 3796

Published: March 5, 2021

Abstract Artificial intelligence (AI) has made impressive progress over the past few years, including many applications in medical imaging. Numerous commercial solutions based on AI techniques are now available for sale, forcing radiology practices to learn how to properly assess these tools. While several guidelines describing good practices for conducting and reporting AI-based research in medicine have been published, fewer efforts have focused on recommendations addressing the key questions to consider when critically assessing AI solutions before purchase. Commercial AI solutions are typically complicated software products, in the evaluation of which many factors are to be considered. In this work, authors from academia and industry joined to propose a practical framework that will help stakeholders evaluate commercial AI solutions in radiology (the ECLAIR guidelines) and reach an informed decision. Topics to consider include the relevance of the solution from the point of view of each stakeholder, issues regarding performance and validation, usability and integration, regulatory and legal aspects, and financial and support services. Key Points • Numerous commercial solutions based on artificial intelligence techniques are now available for sale, and radiology practices have to learn how to properly assess these tools. • We propose a framework focusing on practical points to consider when assessing an AI solution in medical imaging, allowing all stakeholders to conduct relevant discussions with manufacturers and reach an informed decision as to whether to purchase an AI solution for imaging applications.

Language: English

Citations

130

BS-Net: Learning COVID-19 pneumonia severity on a large chest X-ray dataset DOI Open Access
Alberto Signoroni, Mattia Savardi, Sergio Benini

et al.

Medical Image Analysis, Journal Year: 2021, Volume and Issue: 71, P. 102046 - 102046

Published: March 31, 2021

Language: English

Citations

124

Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks DOI Creative Commons
Sajid Nazir, Diane M. Dickson, Muhammad Usman Akram

et al.

Computers in Biology and Medicine, Journal Year: 2023, Volume and Issue: 156, P. 106668 - 106668

Published: Feb. 20, 2023

Artificial Intelligence (AI) techniques of deep learning have revolutionized disease diagnosis with their outstanding image classification performance. In spite of these results, the widespread adoption of these techniques in clinical practice is still taking place at a moderate pace. One major hindrance is that a trained Deep Neural Network (DNN) model provides a prediction, but questions about why and how that prediction was made remain unanswered. This linkage is of utmost importance for the regulated healthcare domain to increase trust in the automated system by practitioners, patients and other stakeholders. The application of deep learning to medical imaging has to be interpreted with caution due to health and safety concerns, similar to the blame attribution in the case of an accident involving autonomous cars. The consequences of both false positive and false negative cases are far reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that the state-of-the-art algorithms comprise complex interconnected structures and millions of parameters, and have a 'black box' nature, offering little understanding of their inner workings, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques help to understand model predictions, which helps develop trust in the system, accelerate diagnosis, and meet adherence to regulatory requirements. This survey provides a comprehensive review of the promising field of XAI for biomedical imaging diagnostics. We also provide a categorization of XAI techniques, discuss the open challenges, and outline future directions that would be of interest to clinicians, regulators and model developers.
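For readers unfamiliar with the family of techniques this survey categorizes, here is a minimal, hypothetical sketch of one of the simplest gradient-based saliency methods in PyTorch; the tiny untrained network and random tensor stand in for a real diagnostic model and a real biomedical image and are not taken from the survey.

```python
# Illustrative sketch of a basic XAI technique from this literature:
# a vanilla gradient saliency map for an image classifier (PyTorch).
# The tiny untrained CNN and random tensor are placeholders only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),          # two classes, e.g. "disease" vs "no disease"
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)  # placeholder scan
logits = model(image)
target_class = logits.argmax(dim=1).item()

# Gradient of the predicted class score w.r.t. the input pixels:
# large absolute values mark pixels that most influence the prediction.
logits[0, target_class].backward()
saliency = image.grad.abs().squeeze()   # (64, 64) heat map
print(saliency.shape, saliency.max().item())
```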

Language: English

Citations

124

Benchmarking saliency methods for chest X-ray interpretation DOI Creative Commons
Adriel Saporta, Xiaotong Gui, Ashwin Agrawal

et al.

Nature Machine Intelligence, Journal Year: 2022, Volume and Issue: 4(10), P. 867 - 878

Published: Oct. 10, 2022

Abstract Saliency methods, which produce heat maps that highlight the areas of a medical image that influence model prediction, are often presented to clinicians as an aid in diagnostic decision-making. However, rigorous investigation of the accuracy and reliability of these strategies is necessary before they are integrated into the clinical setting. In this work, we quantitatively evaluate seven saliency methods, including Grad-CAM, across multiple neural network architectures using two evaluation metrics. We establish the first human benchmark for chest X-ray segmentation in a multilabel classification set-up, and examine under what conditions saliency methods might be more prone to failure in localizing important pathologies compared with the expert benchmark. We find that (1) while Grad-CAM generally localized pathologies better than the other evaluated saliency methods, all methods performed significantly worse than the expert benchmark, (2) the gap in localization performance between the saliency methods and the expert benchmark was largest for pathologies that were smaller in size and had more complex shapes, and (3) model confidence was positively correlated with saliency method localization performance. Our work demonstrates several limitations of saliency methods that must be addressed before we can rely on them for deep learning explainability in medical imaging.
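The following is a simplified, hypothetical sketch of the kind of localization check described above: thresholding a saliency heat map and comparing it with an expert-drawn segmentation mask via intersection over union. The arrays are synthetic placeholders and the metric is illustrative, not the authors' exact evaluation code.

```python
# Simplified sketch of comparing a saliency map with an expert benchmark:
# threshold the heat map and compute intersection-over-union against an
# expert-drawn mask. Synthetic placeholder data, not the paper's pipeline.
import numpy as np

def saliency_iou(heat_map: np.ndarray, expert_mask: np.ndarray,
                 threshold: float = 0.5) -> float:
    """IoU between the thresholded saliency map and the expert mask."""
    predicted = heat_map >= threshold * heat_map.max()
    intersection = np.logical_and(predicted, expert_mask).sum()
    union = np.logical_or(predicted, expert_mask).sum()
    return float(intersection / union) if union else 0.0

rng = np.random.default_rng(0)
heat_map = rng.random((224, 224))          # stand-in Grad-CAM output
expert_mask = np.zeros((224, 224), bool)   # stand-in radiologist annotation
expert_mask[80:140, 90:160] = True

print(f"IoU vs. expert benchmark: {saliency_iou(heat_map, expert_mask):.3f}")
```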

Language: English

Citations

108

A manifesto on explainability for artificial intelligence in medicine DOI Creative Commons
Carlo Combi, Beatrice Amico, Riccardo Bellazzi

et al.

Artificial Intelligence in Medicine, Journal Year: 2022, Volume and Issue: 133, P. 102423 - 102423

Published: Oct. 9, 2022

The rapid increase of interest in, and use of, artificial intelligence (AI) in computer applications has raised a parallel concern about its ability (or lack thereof) to provide understandable, or explainable, output to users. This concern is especially legitimate in biomedical contexts, where patient safety is of paramount importance. This position paper brings together seven researchers working in the field with different roles and perspectives, to explore in depth the concept of explainable AI, or XAI, offering a functional definition and a conceptual framework or model that can be used when considering XAI. This is followed by a series of desiderata for attaining explainability, each of which touches upon a key domain in biomedicine.

Language: English

Citations

95

Computer vision in surgery: from potential to clinical value DOI Creative Commons
Pietro Mascagni, Deepak Alapatt, Luca Sestini

et al.

npj Digital Medicine, Journal Year: 2022, Volume and Issue: 5(1)

Published: Oct. 28, 2022

Abstract Hundreds of millions of operations are performed worldwide each year, and the rising uptake in minimally invasive surgery has enabled fiber optic cameras and robots to become both important tools to conduct surgery and sensors from which to capture information about surgery. Computer vision (CV), the application of algorithms to analyze and interpret visual data, has become a critical technology through which to study the intraoperative phase of care, with the goals of augmenting surgeons' decision-making processes, supporting safer surgery, and expanding access to surgical care. While much work has been done on potential use cases, there are currently no CV tools widely used for diagnostic or therapeutic applications in surgery. Using laparoscopic cholecystectomy as an example, we reviewed the current CV techniques that have been applied to surgery and their clinical applications. Finally, we discuss the challenges and obstacles that remain to be overcome for broader implementation and adoption.

Language: English

Citations

94

A survey on the interpretability of deep learning in medical diagnosis DOI Open Access

Qiaoying Teng, Zhe Liu, Yuqing Song

et al.

Multimedia Systems, Journal Year: 2022, Volume and Issue: 28(6), P. 2335 - 2355

Published: June 25, 2022

Language: English

Citations

82

Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations DOI
Pouyan Esmaeilzadeh

Artificial Intelligence in Medicine, Journal Year: 2024, Volume and Issue: 151, P. 102861 - 102861

Published: March 30, 2024

Language: English

Citations

75