The clinician-AI interface: intended use and explainability in FDA-cleared AI devices for medical image interpretation
Stephanie L. McNamara, Paul H. Yi, William Lotter

et al.

npj Digital Medicine, Journal Year: 2024, Volume and Issue: 7(1)

Published: March 26, 2024

As applications of AI in medicine continue to expand, there is an increasing focus on integration into clinical practice. An underappreciated aspect of this translation is where the AI fits into the clinical workflow, and in turn, the outputs generated by the AI to facilitate clinician interaction in this workflow. For instance, in the canonical use case of AI for medical image interpretation, the AI could prioritize cases before clinician review or even autonomously interpret images without clinician review. A related aspect is explainability: does the AI generate outputs that help explain its predictions to clinicians? While many clinical AI workflows and explainability techniques have been proposed, a summative assessment of the current scope of clinical practice is lacking. Here, we evaluate the current state of FDA-cleared AI devices for medical image interpretation assistance in terms of intended clinical use, outputs generated, and types of explainability offered. We create a curated database focused on these aspects of the clinician-AI interface, where we find a high frequency of “triage” devices, notable variability in output characteristics across products, and often limited explainability of AI predictions. Altogether, we aim to increase the transparency of the current landscape of the clinician-AI interface and highlight the need to rigorously assess which interaction strategies ultimately lead to the best clinical outcomes.

Language: English

Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology
Jeremy Petch, Shuang Di, Walter Nelson

et al.

Canadian Journal of Cardiology, Journal Year: 2021, Volume and Issue: 38(2), P. 204 - 213

Published: Sept. 14, 2021

Many clinicians remain wary of machine learning because of longstanding concerns about “black box” models. “Black box” is shorthand for models that are sufficiently complex that they are not straightforwardly interpretable to humans. Lack of interpretability in predictive models can undermine trust in those models, especially in health care, in which so many decisions are, quite literally, life-and-death issues. There has been a recent explosion of research in the field of explainable machine learning aimed at addressing these concerns. The promise of explainable machine learning is considerable, but it is important for cardiologists, who may encounter these techniques in clinical decision-support tools or novel research papers, to have a critical understanding of both their strengths and their limitations. This paper reviews key concepts and techniques in explainable machine learning as they apply to cardiology. Key concepts reviewed include interpretability vs explainability and global vs local explanations. Techniques demonstrated include permutation importance, surrogate decision trees, local interpretable model-agnostic explanations, and partial dependence plots. We discuss several limitations of these techniques, focusing on how the nature of explanations as approximations may omit important information about how black-box models work and why they make certain predictions. We conclude by proposing a rule of thumb for when it is appropriate to use black-box models rather than explainable models.

Language: English

Citations

374
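
The Petch et al. review above names several model-agnostic explanation techniques. As a minimal, hedged illustration of one of them, the sketch below computes permutation importance for a black-box classifier with scikit-learn; the synthetic dataset and the random-forest model are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of permutation importance, one of the techniques named in
# the Petch et al. review. Synthetic data and the random-forest model are
# illustrative assumptions, not details from the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "clinical" dataset: 5 informative features out of 10.
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A black-box model: accurate, but not directly interpretable.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle each feature on held-out data and measure
# how much the score drops; a large drop means the model relied on it.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

As the review cautions, such explanations are approximations of model behavior, not a description of its internal mechanism.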

Transparency of deep neural networks for medical image analysis: A review of interpretability methods
Zohaib Salahuddin, Henry C. Woodruff, Avishek Chatterjee

et al.

Computers in Biology and Medicine, Journal Year: 2021, Volume and Issue: 140, P. 105111 - 105111

Published: Dec. 4, 2021

Artificial Intelligence (AI) has emerged as a useful aid in numerous clinical applications for diagnosis and treatment decisions. Deep neural networks have shown the same or better performance than clinicians in many tasks owing to the rapid increase in available data and computational power. In order to conform to the principles of trustworthy AI, it is essential that an AI system be transparent, robust, and fair, and ensure accountability. Current deep neural solutions are referred to as black-boxes due to a lack of understanding of the specifics concerning their decision-making process. Therefore, there is a need to ensure the interpretability of deep neural networks before they can be incorporated into the routine clinical workflow. In this narrative review, we utilized systematic keyword searches and domain expertise to identify nine different types of interpretability methods that have been used for understanding deep learning models in medical image analysis, categorized based on the type of generated explanations and their technical similarities. Furthermore, we report the progress made towards evaluating the explanations produced by the various interpretability methods. Finally, we discuss limitations, provide guidelines for using interpretability methods, and outline future directions for the interpretability of deep neural networks in medical imaging analysis.

Language: English

Citations

285
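
One widely used family of the interpretability methods reviewed above is gradient-based saliency mapping. The toy sketch below computes a vanilla gradient saliency map in PyTorch; the tiny untrained CNN and the random stand-in "image" are assumptions for demonstration only, not a method from the paper.

```python
# Toy sketch of a vanilla gradient saliency map, one family of the
# interpretability methods reviewed above. The tiny untrained CNN and the
# random stand-in "image" are illustrative assumptions only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 2),  # two classes, e.g. normal vs abnormal
)
model.eval()

# A stand-in for a 28x28 grayscale medical image.
image = torch.randn(1, 1, 28, 28, requires_grad=True)

# Gradient of the predicted-class score with respect to the input pixels:
# large-magnitude gradients mark pixels that most influence the prediction.
scores = model(image)
scores[0, scores.argmax()].backward()
saliency = image.grad.abs().squeeze()  # 28x28 heatmap to overlay on the image
print(saliency.shape)
```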

Bias in artificial intelligence algorithms and recommendations for mitigation
Lama Nazer, Razan Zatarah, Shai Waldrip

et al.

PLOS Digital Health, Journal Year: 2023, Volume and Issue: 2(6), P. e0000278 - e0000278

Published: June 22, 2023

The adoption of artificial intelligence (AI) algorithms is rapidly increasing in healthcare. Such algorithms may be shaped by various factors, such as social determinants of health, that can influence health outcomes. While AI algorithms have been proposed as a tool to expand the reach of quality healthcare to underserved communities and improve health equity, recent literature has raised concerns about the propagation of biases and health disparities through the implementation of these algorithms. Thus, it is critical to understand the sources of bias inherent in AI-based algorithms. This review aims to highlight the potential sources of bias within each step of developing AI algorithms in healthcare, starting from framing the problem, through data collection, preprocessing, development, and validation, as well as their full implementation. For each of these steps, we also discuss strategies to mitigate the bias and disparities. A checklist was developed with recommendations for reducing bias during the development and implementation stages. It is important for developers and users of AI-based algorithms to keep these considerations in mind to advance health equity for all populations.

Language: English

Citations

243

Ethical and regulatory challenges of AI technologies in healthcare: A narrative review
Ciro Mennella, Umberto Maniscalco, Giuseppe De Pietro

et al.

Heliyon, Journal Year: 2024, Volume and Issue: 10(4), P. e26297 - e26297

Published: Feb. 1, 2024

Over the past decade, there has been a notable surge in AI-driven research, specifically geared toward enhancing crucial clinical processes and outcomes. The potential of AI-powered decision support systems to streamline clinical workflows, assist in diagnostics, and enable personalized treatment is increasingly evident. Nevertheless, the introduction of these cutting-edge solutions poses substantial challenges in care environments, necessitating a thorough exploration of ethical, legal, and regulatory considerations. A robust governance framework is imperative to foster the acceptance and successful implementation of AI in healthcare. This article delves deep into the critical ethical and regulatory concerns entangled with the deployment of AI in clinical practice. It not only provides a comprehensive overview of the role of AI technologies in healthcare but also offers an insightful perspective on the associated challenges, making a pioneering contribution to the field. This research aims to address the current challenges in digital healthcare by presenting valuable recommendations for all stakeholders eager to advance the development of innovative AI-driven healthcare systems.

Language: English

Citations

183

From promise to practice: towards the realisation of AI-informed mental health care
Nikolaos Koutsouleris, Tobias U. Hauser, Vasilisa Skvortsova

et al.

The Lancet Digital Health, Journal Year: 2022, Volume and Issue: 4(11), P. e829 - e840

Published: Oct. 10, 2022

In this Series paper, we explore the promises and challenges of artificial intelligence (AI)-based precision medicine tools in mental health care from clinical, ethical, and regulatory perspectives. The real-world implementation of these tools is increasingly considered the prime solution for key issues in mental health care, such as delayed, inaccurate, and inefficient care delivery. Similarly, machine-learning-based empirical strategies are becoming commonplace in psychiatric research because of their potential to adequately deconstruct the biopsychosocial complexity of mental health disorders and hence improve nosology and prognostic and preventive paradigms. However, the steps needed to translate these promises into practice are currently hampered by multiple interacting challenges. These obstructions range from the current technology-distant state of clinical practice, over the lack of valid databases required to feed data-intensive AI algorithms, to model development and validation considerations being disconnected from the core principles of clinical utility and ethical acceptability. We provide recommendations on how these challenges could be addressed from an interdisciplinary perspective to pave the way towards a framework for AI-informed mental health care, leveraging the combined strengths of human and artificial intelligence.

Language: English

Citations

133

To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems
Julia Amann, Dennis Vetter, Stig Nikolaj Fasmer Blomberg

et al.

PLOS Digital Health, Journal Year: 2022, Volume and Issue: 1(2), P. e0000016 - e0000016

Published: Feb. 17, 2022

Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments in favor of and against explainability for AI-powered Clinical Decision Support Systems (CDSS) applied to a concrete use case, namely an AI-powered CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs in this use case, allowing for abstractions to a more general level. Our analysis focused on three layers: technical considerations, human factors, and the designated role of the system in decision-making. Our findings suggest that whether explainability can provide added value to a CDSS depends on several key questions: technical feasibility, the level of validation in the case of explainable algorithms, the characteristics of the context in which the system is implemented, the designated role in the decision-making process, and the key user group(s). Thus, each CDSS will require an individualized assessment of explainability needs, and we give an example of how such an assessment could look in practice.

Language: English

Citations

129

A manifesto on explainability for artificial intelligence in medicine
Carlo Combi, Beatrice Amico, Riccardo Bellazzi

et al.

Artificial Intelligence in Medicine, Journal Year: 2022, Volume and Issue: 133, P. 102423 - 102423

Published: Oct. 9, 2022

The rapid increase of interest in, and use of, artificial intelligence (AI) in computer applications has raised a parallel concern about its ability (or lack thereof) to provide understandable, or explainable, output to users. This is especially legitimate in biomedical contexts, where patient safety is of paramount importance. This position paper brings together seven researchers working in the field, with different roles and perspectives, to explore in depth the concept of explainable AI, or XAI, offering a functional definition and a conceptual framework and model that can be used when considering XAI. This is followed by a series of desiderata for attaining explainability in AI, each of which touches upon a key domain in biomedicine.

Language: English

Citations

95

Generative AI in Health Care and Liability Risks for Physicians and Safety Concerns for Patients
Mindy Duffourc, Sara Gerke

JAMA, Journal Year: 2023, Volume and Issue: 330(4), P. 313 - 313

Published: July 6, 2023

This Viewpoint discusses the potential use of generative artificial intelligence (AI) in medical care and the liability risks for physicians using the technology, as well as offering suggestions for safeguards to protect patients.

Language: English

Citations

67

The benefits and pitfalls of machine learning for biomarker discovery
Sandra Ng, Sara Masarone, David Watson

et al.

Cell and Tissue Research, Journal Year: 2023, Volume and Issue: 394(1), P. 17 - 31

Published: July 27, 2023

Prospects for the discovery of robust and reproducible biomarkers have improved considerably with the development of sensitive omics platforms that can enable measurement of biological molecules at an unprecedented scale. With technical barriers to success lowering, the challenge is now moving into the analytical domain. Genome-wide discovery presents a problem of scale and multiple testing, as standard statistical methods struggle to distinguish signal from noise in increasingly complex biological systems. Machine learning and AI methods are good at finding answers in large datasets, but they have a tendency to overfit solutions. It may be possible to find a local answer or mechanism that is specific to a patient sample or a small group of samples, but this may not generalise to wider populations due to the high likelihood of false discovery. The rise of explainable AI offers the opportunity to improve the chances of true discovery by providing explanations for predictions that can be explored mechanistically before proceeding to costly and time-consuming validation studies. This review aims to introduce some of the basic concepts of machine learning for biomarker discovery, with a focus on post hoc explanation of predictions. To illustrate this, we consider how explainable AI has already been used successfully, and we explore a case study that applies AI to biomarker discovery in rheumatoid arthritis, demonstrating the accessibility of tools for machine learning. We use this case study to discuss the potential challenges and solutions for using machine learning to critically interrogate disease and response mechanisms.

Language: English

Citations

51
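
The overfitting and false-discovery pitfall described in the abstract above is easy to reproduce. A minimal sketch, assuming a purely random dataset with far more features than samples (as in many omics studies): a flexible model fits the training labels perfectly, while cross-validation exposes chance-level performance.

```python
# Minimal sketch of the false-discovery pitfall described above: with far
# more features than samples (common in omics data), a flexible model can
# "find biomarkers" in pure noise. The data here are random by construction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5000))   # 40 samples, 5000 random "biomarkers"
y = rng.integers(0, 2, size=40)   # labels with no real signal

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("train accuracy:   ", model.fit(X, y).score(X, y))              # ~1.0 (memorized)
print("cross-validated:  ", cross_val_score(model, X, y, cv=5).mean())  # ~0.5 (chance)
```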

Human-AI Ensembles: When Can They Work?
Vivek Choudhary, Arianna Marchetti, Yash Raj Shrestha

et al.

Journal of Management, Journal Year: 2023, Volume and Issue: unknown

Published: Oct. 3, 2023

An “ensemble” approach to decision-making involves aggregating the results from different decision makers solving the same problem (i.e., a division of labor without specialization). We draw on the literatures on machine learning-based Artificial Intelligence (AI) as well as human decision-making to propose conditions under which human-AI ensembles can be useful. We argue that human and AI-based algorithmic decision-making can be usefully ensembled even when neither has a clear advantage over the other in terms of predictive accuracy, and even if neither alone can attain satisfactory accuracy in absolute terms. Many managerial decisions have these attributes, yet collaboration between humans and AI is usually ruled out in such contexts because the conditions for specialization are not met. However, we show that gains through ensembling remain a possibility that is still useful to identify.

Language: English

Citations

47
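
A minimal simulation of the ensembling logic argued for above: two decision makers (“human” and “AI”) with similar, individually unsatisfactory accuracy but partly independent errors beat either alone when their outputs are averaged. The error model and independence assumption are illustrative, not results from the paper.

```python
# Minimal simulation of the human-AI ensembling argument above: two decision
# makers with similar individual accuracy but independent errors can beat
# either alone when their scores are averaged. All numbers are illustrative
# assumptions, not results from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
truth = rng.integers(0, 2, size=n)  # binary decisions with a ground truth

def noisy_scores(truth, noise):
    # Continuous confidence scores: the true label plus independent noise.
    return truth + rng.normal(0, noise, size=truth.shape)

human = noisy_scores(truth, 1.0)   # each alone is only modestly accurate
ai = noisy_scores(truth, 1.0)
ensemble = (human + ai) / 2        # aggregation without specialization

for name, score in [("human", human), ("AI", ai), ("ensemble", ensemble)]:
    acc = ((score > 0.5) == truth).mean()
    print(f"{name:8s} accuracy: {acc:.3f}")
```

With these parameters, each individual scores roughly 0.69 while the ensemble reaches roughly 0.76, because averaging shrinks the variance of the independent noise.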