A quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies: QUADAS-AI
Viknesh Sounderajah, Hutan Ashrafian, Sherri Rose

et al.

Nature Medicine, Journal Year: 2021, Volume and Issue: 27(10), P. 1663 - 1665

Published: Oct. 1, 2021

Language: English

AI in health and medicine
Pranav Rajpurkar, Emma Chen, Oishi Banerjee

et al.

Nature Medicine, Journal Year: 2022, Volume and Issue: 28(1), P. 31 - 38

Published: Jan. 1, 2022

Language: English

Citations

1483

Deep learning-enabled medical computer vision
Andre Esteva, Katherine Chou, Serena Yeung

et al.

npj Digital Medicine, Journal Year: 2021, Volume and Issue: 4(1)

Published: Jan. 8, 2021

Abstract A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields, including medicine, to benefit from the insights that AI techniques can extract from data. Here we survey recent developments in modern computer vision techniques, powered by deep learning, for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit, including cardiology, pathology, dermatology, and ophthalmology, and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles required for real-world clinical deployment of these technologies.

Language: English

Citations

885

Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis
Ravi Aggarwal, Viknesh Sounderajah, Guy Martin

et al.

npj Digital Medicine, Journal Year: 2021, Volume and Issue: 4(1)

Published: April 7, 2021

Deep learning (DL) has the potential to transform medical diagnostics. However, the diagnostic accuracy of DL is uncertain. Our aim was to evaluate the diagnostic accuracy of DL algorithms to identify pathology in medical imaging. Searches were conducted in Medline and EMBASE up to January 2020. We identified 11,921 studies, of which 503 were included in the systematic review. Eighty-two studies in ophthalmology, 82 in breast disease and 115 in respiratory disease were included for meta-analysis; two hundred twenty-four studies from other specialities were included for qualitative review. Peer-reviewed studies that reported on the diagnostic accuracy of DL models using medical imaging were included. Primary outcome measures were diagnostic accuracy, study design and reporting standards in the literature. Estimates were pooled using random-effects meta-analysis. In ophthalmology, AUCs ranged between 0.933 and 1 for diagnosing diabetic retinopathy, age-related macular degeneration and glaucoma on retinal fundus photographs and optical coherence tomography. In respiratory imaging, AUCs ranged between 0.864 and 0.937 for diagnosing lung nodules or lung cancer on chest X-ray or CT scan. For breast imaging, AUCs ranged between 0.868 and 0.909 for diagnosing breast cancer on mammogram, ultrasound, MRI and digital breast tomosynthesis. Heterogeneity between studies was high, and extensive variation in methodology, terminology and outcome measures was noted, which can lead to an overestimation of the diagnostic accuracy of DL algorithms in medical imaging. There is an immediate need for the development of artificial intelligence-specific EQUATOR guidelines, particularly STARD, in order to provide guidance around key issues in this field.

Language: English

Citations

574

The imperative for regulatory oversight of large language models (or generative AI) in healthcare
Bertalan Meskó, Eric J. Topol

npj Digital Medicine, Journal Year: 2023, Volume and Issue: 6(1)

Published: July 6, 2023

The rapid advancements in artificial intelligence (AI) have led to the development of sophisticated large language models (LLMs) such as GPT-4 and Bard. The potential implementation of LLMs in healthcare settings has already garnered considerable attention because of their diverse applications, which include facilitating clinical documentation, obtaining insurance pre-authorization, summarizing research papers, and working as a chatbot to answer questions for patients about their specific data and concerns. While offering transformative potential, LLMs warrant a very cautious approach, since these models are trained differently from the AI-based medical technologies that are already regulated, especially within the critical context of caring for patients. The newest version, GPT-4, released in March 2023, brings both the potential of this technology to support multiple medical tasks and the risks of mishandling results of varying reliability to a new level. Besides being an advanced LLM, it will be able to read text on images and analyze those images. Regulating generative AI in medicine without damaging its exciting potential is a timely challenge to ensure safety, maintain ethical standards, and protect patient privacy. We argue that regulatory oversight should assure that medical professionals and patients can use LLMs without causing harm or compromising their data or privacy. This paper summarizes our practical recommendations for what we expect regulators to do to bring this vision to reality.

Language: English

Citations

541

Using machine learning approaches for multi-omics data analysis: A review
Parminder Singh Reel, Smarti Reel, Ewan R. Pearson

et al.

Biotechnology Advances, Journal Year: 2021, Volume and Issue: 49, P. 107739 - 107739

Published: March 29, 2021

Language: English

Citations

527

Synthetic data in machine learning for medicine and healthcare
Richard J. Chen, Ming Y. Lu, Tiffany Chen

et al.

Nature Biomedical Engineering, Journal Year: 2021, Volume and Issue: 5(6), P. 493 - 497

Published: June 15, 2021

Language: English

Citations

483

Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis
Urs J. Muehlematter, Paola Daniore, Kerstin Noëlle Vokinger

et al.

The Lancet Digital Health, Journal Year: 2021, Volume and Issue: 3(3), P. e195 - e203

Published: Jan. 19, 2021

There has been a surge of interest in artificial intelligence and machine learning (AI/ML)-based medical devices. However, it is poorly understood how and which AI/ML-based medical devices have been approved in the USA and Europe. We searched governmental and non-governmental databases and identified 222 devices approved in the USA and 240 in Europe. The number of approved devices has increased substantially since 2015, with many approved for use in radiology, and few qualified as high-risk devices. Of the 124 devices approved in both the USA and Europe, 80 were first approved in Europe. One possible reason for approval in Europe before the USA might be the potentially less rigorous evaluation of devices in Europe. These substantial differences highlight the need to ensure adequate regulation of these devices. Currently, there is no specific regulatory pathway for AI/ML-based medical devices in the USA or Europe. We recommend more transparency on how AI/ML-based devices are regulated to improve public trust and the efficacy, safety, and quality of these devices. A comprehensive, publicly accessible database of device details for Conformité Européenne (CE)-marked devices in Europe and US Food and Drug Administration-approved devices is needed.

Language: English

Citations

455

How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals
Eric Q. Wu, Kevin Wu, Roxana Daneshjou

et al.

Nature Medicine, Journal Year: 2021, Volume and Issue: 27(4), P. 582 - 584

Published: April 1, 2021

Language: English

Citations

360

AI recognition of patient race in medical imaging: a modelling study
Judy Wawira Gichoya, Imon Banerjee, Ananth Reddy Bhimireddy

et al.

The Lancet Digital Health, Journal Year: 2022, Volume and Issue: 4(6), P. e406 - e414

Published: May 11, 2022

Background: In medical imaging, prior studies have demonstrated disparate AI performance by race, yet there is no known correlation for race on medical imaging that would be obvious to the human expert interpreting the images. Methods: Using private and public datasets, we evaluate: A) the quantification of deep learning models' ability to detect race from medical images, including the ability of these models to generalize to external environments and across multiple imaging modalities; B) the assessment of possible confounding by anatomic and phenotypic population features, such as disease distribution and body habitus, as predictors of race; and C) investigation into the underlying mechanism by which AI models can recognize race. Findings: Standard deep learning models can be trained to predict race from medical images with high performance across multiple modalities. Our findings hold under external validation conditions, as well as when models are optimized to perform clinically motivated tasks. We demonstrate that this detection is not due to trivial proxies or imaging-related surrogate covariates for race, such as underlying disease distribution. Finally, we show that the performance persists over all anatomical regions and the frequency spectrum of the images, suggesting that mitigation efforts will be challenging and demand further study. Interpretation: We emphasize that a model's ability to predict self-reported race is not itself the issue of importance. However, our finding that AI can predict self-reported race trivially, even from corrupted, cropped, and noised medical images, in a setting where clinical experts cannot, creates an enormous risk for all model deployments in medical imaging: if a model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to.

Language: English

Citations

323

Artificial intelligence in radiology: 100 commercially available products and their scientific evidence
Kicky G. van Leeuwen, Steven Schalekamp, Matthieu Rutten

et al.

European Radiology, Journal Year: 2021, Volume and Issue: 31(6), P. 3797 - 3804

Published: April 15, 2021

Abstract Objectives: Map the current landscape of commercially available artificial intelligence (AI) software for clinical radiology and review the availability of their scientific evidence. Methods: We created an online overview of CE-marked AI software products for clinical radiology based on vendor-supplied product specifications (www.aiforradiology.com). Characteristics such as modality, subspeciality, main task, regulatory information, deployment, and pricing model were retrieved. We conducted an extensive literature search for scientific evidence on these products, and articles were classified according to a hierarchical model of efficacy. Results: The overview included 100 CE-marked AI products from 54 different vendors. For 64/100 products, there was no peer-reviewed evidence of their efficacy, and we observed large heterogeneity in deployment methods, pricing models, and regulatory classes. The evidence for the remaining 36/100 products comprised 237 papers that predominantly (65%) focused on diagnostic accuracy (efficacy level 2). Eighteen of the 100 products had evidence regarded as level 3 or higher, validating the (potential) impact on diagnostic thinking, patient outcome, or costs. Half of the papers (116/237) were independent, that is, not (co-)funded or (co-)authored by the vendor. Conclusions: Even though the commercial supply of AI software in radiology already holds 100 CE-marked products, we conclude that the sector is still in its infancy; for most products higher-level evidence of efficacy is lacking, and only 18/100 products have demonstrated (potential) clinical impact. Key Points: Artificial intelligence for radiology is still in its infancy even though 100 CE-marked AI products are already available; only 36 out of 100 products have peer-reviewed evidence, of which most studies demonstrate lower levels of efficacy; and there is wide variety in the deployment strategies, pricing models, and CE marking classes of AI products for radiology.

Language: English

Citations

311