Global healthcare fairness: We should be sharing more, not less, data
Kenneth P. Seastedt, Patrick Schwab, Zach O’Brien et al.

PLOS Digital Health, Journal Year: 2022, Volume and Issue: 1(10), P. e0000102 - e0000102

Published: Oct. 6, 2022

The availability of large, deidentified health datasets has enabled significant innovation in using machine learning (ML) to better understand patients and their diseases. However, questions remain regarding the true privacy of this data, patient control over their data, and how we regulate data sharing in a way that does not encumber progress or further potentiate biases for underrepresented populations. After reviewing the literature on potential reidentifications of patients in publicly available datasets, we argue that the cost of slowing ML progress, measured in terms of access to future medical innovations and clinical software, is too great to limit sharing data through large publicly available databases over concerns of imperfect data anonymization. This cost is especially great for developing countries, where the barriers preventing inclusion in such databases will continue to rise, further excluding these populations and increasing existing biases that favor high-income countries. Preventing artificial intelligence's progress towards precision medicine, and sliding back to clinical practice dogma, may pose a larger threat than potential patient reidentification within publicly available datasets. While the risk to patient privacy should be minimized, we believe it will never be zero, and society must determine an acceptable risk threshold below which data sharing can occur, for the benefit of a global medical knowledge system.

Language: English

Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers
Catherine A. Gao, Frederick M. Howard, Nikolay S. Markov et al.

npj Digital Medicine, Journal Year: 2023, Volume and Issue: 6(1)

Published: April 26, 2023

Abstract: Large language models such as ChatGPT can produce increasingly realistic text, with unknown implications for the accuracy and integrity of using these models in scientific writing. We gathered fifty research abstracts from five high-impact-factor medical journals and asked ChatGPT to generate research abstracts based on their titles and journals. Most generated abstracts were detected by an AI output detector, the 'GPT-2 Output Detector', with % 'fake' scores (higher meaning more likely to be generated) of median [interquartile range] 99.98% [12.73%, 99.98%], compared with 0.02% [IQR 0.02%, 0.09%] for the original abstracts. The AUROC of the AI output detector was 0.94. Generated abstracts scored lower than original abstracts when run through a plagiarism detection website and iThenticate (higher scores meaning more matching text found). When given a mixture of original and generated abstracts, blinded human reviewers correctly identified 68% of generated abstracts as being generated by ChatGPT, but incorrectly flagged 14% of original abstracts as generated. Reviewers indicated that it was surprisingly difficult to differentiate between the two, though the abstracts they suspected were generated were vaguer and more formulaic. ChatGPT writes believable scientific abstracts, though with completely generated data. Depending on publisher-specific guidelines, AI output detectors may serve as an editorial tool to help maintain scientific standards. The boundaries of ethical and acceptable use of large language models in scientific writing are still being discussed, with different journals and conferences adopting varying policies.
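
To make the reported detector evaluation concrete, here is a minimal sketch of how an AUROC like the 0.94 above can be computed from per-abstract 'fake' scores; the labels and scores below are invented placeholders, not the study's data.

```python
# Minimal sketch: AUROC of an AI-output detector from its "fake" scores.
# Labels and scores are illustrative placeholders, not the study's data.
from sklearn.metrics import roc_auc_score

labels = [1, 1, 1, 1, 0, 0, 0, 0]          # 1 = ChatGPT-generated, 0 = original
fake_scores = [99.98, 99.97, 12.73, 99.90,  # detector "% fake" per abstract
               0.02, 0.02, 0.09, 0.05]      # (higher = more likely generated)

print(f"AUROC: {roc_auc_score(labels, fake_scores):.2f}")
```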

Language: English

Citations: 423

Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers
Catherine A. Gao, Frederick M. Howard, Nikolay S. Markov et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2022, Volume and Issue: unknown

Published: Dec. 27, 2022

Background: Large language models such as ChatGPT can produce increasingly realistic text, with unknown implications for the accuracy and integrity of using these models in scientific writing. Methods: We gathered ten research abstracts from each of five high-impact-factor medical journals (n=50) and asked ChatGPT to generate research abstracts based on their titles and journals. We evaluated the abstracts with an artificial intelligence (AI) output detector and a plagiarism detector, and had blinded human reviewers try to distinguish whether abstracts were original or generated. Results: All ChatGPT-generated abstracts were written clearly, but only 8% correctly followed the specific journal's formatting requirements. Most generated abstracts were detected by the AI output detector, with % 'fake' scores (higher meaning more likely to be generated) of median [interquartile range] 99.98% [12.73, 99.98], compared with a very low probability of being AI-generated for the original abstracts, 0.02% [0.02, 0.09]. The AUROC of the detector was 0.94. Generated abstracts scored very high on originality in the plagiarism detector (100% [100, 100] originality). Generated abstracts used a similar patient cohort size to the original abstracts, though the exact numbers were fabricated. When given a mixture of original and generated abstracts, blinded human reviewers correctly identified 68% of generated abstracts as being generated by ChatGPT, but incorrectly flagged 14% of original abstracts as generated. Reviewers indicated that it was surprisingly difficult to differentiate between the two, and that the generated abstracts had a vaguer, more formulaic feel. Conclusion: ChatGPT writes believable scientific abstracts, though with completely generated data. These abstracts contain no plagiarized text yet are often identifiable by detectors and skeptical human reviewers. Abstract evaluation for journals and conferences must adapt policy and practice to maintain rigorous scientific standards; we suggest inclusion of AI output detectors in the editorial process and clear disclosure if these technologies are used. The boundaries of ethical and acceptable use of large language models to help scientific writing remain to be determined.
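
For readers who want to try the same kind of screening, a hedged sketch follows using the publicly available RoBERTa-based checkpoint "roberta-base-openai-detector" (the model behind the GPT-2 Output Detector) through the Hugging Face transformers pipeline; label names and score scaling may differ from the web demo the authors used, so treat this as illustrative only.

```python
# Illustrative sketch, not the authors' exact tooling: score a passage with
# the RoBERTa-based GPT-2 output detector checkpoint from the Hugging Face Hub.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

text = "Large language models such as ChatGPT can produce increasingly realistic text."
print(detector(text, truncation=True))  # e.g. [{'label': 'Real' or 'Fake', 'score': ...}]
```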

Language: English

Citations: 338

Artificial intelligence for multimodal data integration in oncology
Jana Lipková, Richard J. Chen, Bowen Chen et al.

Cancer Cell, Journal Year: 2022, Volume and Issue: 40(10), P. 1095 - 1110

Published: Oct. 1, 2022

In oncology, the patient state is characterized by a whole spectrum of modalities, ranging from radiology, histology, and genomics to electronic health records. Current artificial intelligence (AI) models operate mainly in the realm of a single modality, neglecting the broader clinical context, which inevitably diminishes their potential. Integration of different data modalities provides opportunities to increase the robustness and accuracy of diagnostic and prognostic models, bringing AI closer to clinical practice. AI models are also capable of discovering novel patterns within and across modalities suitable for explaining differences in patient outcomes or treatment resistance. The insights gleaned from such models can guide exploration studies and contribute to the discovery of novel biomarkers and therapeutic targets. To support these advances, here we present a synopsis of AI methods and strategies for multimodal data fusion and association discovery. We outline approaches for AI interpretability and directions for AI-driven exploration through multimodal data interconnections. We examine challenges in clinical adoption and discuss emerging solutions.
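
As one concrete reading of "multimodal fusion", the sketch below shows simple late fusion in PyTorch: each modality is encoded separately and the embeddings are concatenated before a shared prediction head. The modality names, feature dimensions, and architecture are illustrative assumptions, not the models reviewed in the paper.

```python
# Minimal late-fusion sketch: per-modality encoders, concatenated embeddings,
# and a shared classification head. All dimensions are illustrative.
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, dims, hidden=128, n_classes=2):
        super().__init__()
        # one small encoder per modality, projecting to a shared width
        self.encoders = nn.ModuleDict(
            {m: nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for m, d in dims.items()}
        )
        self.head = nn.Linear(hidden * len(dims), n_classes)

    def forward(self, inputs):
        # inputs: dict mapping modality name -> (batch, features) tensor
        fused = torch.cat([self.encoders[m](x) for m, x in inputs.items()], dim=-1)
        return self.head(fused)

model = LateFusion({"radiology": 512, "histology": 768, "genomics": 200})
batch = {"radiology": torch.randn(4, 512),
         "histology": torch.randn(4, 768),
         "genomics": torch.randn(4, 200)}
print(model(batch).shape)  # torch.Size([4, 2])
```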

Language: English

Citations: 334

Artificial intelligence in histopathology: enhancing cancer research and clinical oncology
Artem Shmatko, Narmin Ghaffari Laleh, Moritz Gerstung et al.

Nature Cancer, Journal Year: 2022, Volume and Issue: 3(9), P. 1026 - 1038

Published: Sept. 22, 2022

Language: English

Citations: 251

Towards a general-purpose foundation model for computational pathology
Richard J. Chen, Tong Ding, Ming Y. Lu et al.

Nature Medicine, Journal Year: 2024, Volume and Issue: 30(3), P. 850 - 862

Published: March 1, 2024

Language: English

Citations: 216

Algorithmic fairness in artificial intelligence for medicine and healthcare
Richard J. Chen, Judy J. Wang, Drew F. K. Williamson et al.

Nature Biomedical Engineering, Journal Year: 2023, Volume and Issue: 7(6), P. 719 - 742

Published: June 28, 2023

Language: English

Citations: 203

Swarm learning for decentralized artificial intelligence in cancer histopathology
Oliver Lester Saldanha, Philip Quirke, Nicholas P. West et al.

Nature Medicine, Journal Year: 2022, Volume and Issue: 28(6), P. 1232 - 1239

Published: April 25, 2022

Abstract: Artificial intelligence (AI) can predict the presence of molecular alterations directly from routine histopathology slides. However, training robust AI systems requires large datasets, for which data collection faces practical, ethical and legal obstacles. These obstacles could be overcome with swarm learning (SL), in which partners jointly train AI models while avoiding data transfer and monopolistic data governance. Here, we demonstrate the successful use of SL in large, multicentric datasets of gigapixel histopathology images from over 5,000 patients. We show that AI models trained using SL can predict BRAF mutational status and microsatellite instability directly from hematoxylin and eosin (H&E)-stained pathology slides of colorectal cancer. We trained AI models on three patient cohorts from Northern Ireland, Germany and the United States, and validated the prediction performance in two independent datasets from the United Kingdom. Our SL-trained AI models outperform most locally trained models and perform on par with models trained on the merged datasets. In addition, SL-based AI models are data efficient. In the future, SL could be used to train distributed AI models for any histopathology image analysis task, eliminating the need for data transfer.
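
The core mechanic of swarm learning, in which only model parameters cross institutional boundaries, can be sketched as periodic parameter averaging; the toy example below uses a stand-in local update and omits SL's blockchain-based peer coordination entirely, so it illustrates the idea rather than the authors' system.

```python
# Toy sketch of swarm-style training: each partner updates a model locally on
# its own cohort, then only the parameters (never patient data) are averaged.
import numpy as np

def local_update(weights, cohort, lr=0.1):
    """Stand-in for one partner's local training round (not a real model)."""
    return weights + lr * (cohort.mean(axis=0) - weights)

rng = np.random.default_rng(0)
cohorts = [rng.normal(loc=c, size=(100, 3)) for c in range(3)]  # three partners
weights = np.zeros(3)

for sync_round in range(10):
    local = [local_update(weights, c) for c in cohorts]  # trained in parallel
    weights = np.mean(local, axis=0)                     # merge parameters only

print("merged weights:", weights)
```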

Language: English

Citations: 149

Artificial intelligence in liver diseases: Improving diagnostics, prognostics and response prediction
David Nam, Julius Chapiro, Valérie Paradis et al.

JHEP Reports, Journal Year: 2022, Volume and Issue: 4(4), P. 100443 - 100443

Published: Feb. 2, 2022

Clinical routine in hepatology involves the diagnosis and treatment of a wide spectrum of metabolic, infectious, autoimmune and neoplastic diseases. Clinicians integrate qualitative and quantitative information from multiple data sources to make a diagnosis, prognosticate the disease course, and recommend a treatment. In the last 5 years, advances in artificial intelligence (AI), particularly in deep learning, have made it possible to extract clinically relevant information from complex and diverse clinical datasets. In particular, histopathology and radiology image data contain diagnostic, prognostic and predictive information which AI can extract. Ultimately, such AI systems could be implemented in clinical routine as decision support tools. However, in the context of hepatology, this requires further large-scale clinical validation and regulatory approval. Herein, we summarise the state of the art of AI in hepatology, with a particular focus on histopathology and radiology data. We present a roadmap for the development of novel biomarkers and outline the critical obstacles that need to be overcome.

Language: English

Citations: 140

Addressing fairness in artificial intelligence for medical imaging
María Agustina Ricci Lara, Rodrigo Echeveste, Enzo Ferrante et al.

Nature Communications, Journal Year: 2022, Volume and Issue: 13(1)

Published: Aug. 6, 2022

A plethora of work has shown that AI systems can be systematically and unfairly biased against certain populations in multiple scenarios. The field of medical imaging, where AI systems are beginning to be increasingly adopted, is no exception. Here we discuss the meaning of fairness in this area and comment on the potential sources of biases, as well as the strategies available to mitigate them. Finally, we analyze the current state of the field, identifying strengths and highlighting areas of vacancy, and the challenges and opportunities that lie ahead.

Language: English

Citations: 129

Evaluation and Mitigation of Racial Bias in Clinical Machine Learning Models: Scoping Review
Jonathan Huang, Galal Galal, Mozziyar Etemadi et al.

JMIR Medical Informatics, Journal Year: 2022, Volume and Issue: 10(5), P. e36388 - e36388

Published: March 27, 2022

Background: Racial bias is a key concern regarding the development, validation, and implementation of machine learning (ML) models in clinical settings. Despite its potential to propagate health disparities, racial bias in clinical ML has yet to be thoroughly examined, and best practices for bias mitigation remain unclear. Objective: Our objective was to perform a scoping review to characterize the methods by which racial bias in ML models has been assessed and to describe strategies that may be used to enhance algorithmic fairness in clinical ML. Methods: A scoping review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) Extension for Scoping Reviews. A literature search of the PubMed, Scopus, and Embase databases, as well as Google Scholar, identified 635 records, of which 12 studies were included. Results: Applications of ML were varied and involved diagnosis, outcome prediction, and clinical score prediction performed on data sets including images, diagnostic studies, text, and clinical variables. Of the 12 studies, 1 (8%) described a model in routine use, 2 (17%) described prospectively validated models, and the remaining 9 (75%) described internally validated models. In addition, 8 (67%) studies concluded that racial bias was present, 2 (17%) concluded that it was not, and 2 (17%) assessed mitigation strategies without comparison to a baseline model. The fairness metrics used to assess algorithmic racial bias were inconsistent. The most commonly observed metrics were equal opportunity difference (5/12, 42%), accuracy (4/12, 33%), and disparate impact (2/12, 17%). All studies that implemented bias mitigation strategies successfully increased fairness, as measured by the authors' chosen metrics. Preprocessing methods of bias mitigation were the most common across the studies that implemented them. Conclusions: The broad scope of medical ML applications and the potential for patient harm demand an increased emphasis on the evaluation and mitigation of racial bias in clinical ML. However, the adoption of algorithmic fairness principles in medicine remains inconsistent and is limited by poor data availability and model reporting. We recommend that researchers and journal editors emphasize standardized reporting to improve transparency and facilitate the evaluation of racial bias.
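
Two of the fairness metrics named in the review, equal opportunity difference and disparate impact, are easy to state in code; the sketch below uses synthetic predictions and a binary group attribute, and all variable names are illustrative rather than taken from the reviewed studies.

```python
# Illustrative fairness metrics on synthetic data (not the reviewed studies' data).
import numpy as np

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rate between group 1 and group 0."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return tprs[1] - tprs[0]

def disparate_impact(y_pred, group):
    """Ratio of positive-prediction rates: group 0 over group 1."""
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, 1000)   # synthetic outcomes
group = rng.integers(0, 2, 1000)    # synthetic binary group attribute
y_pred = rng.integers(0, 2, 1000)   # stand-in model predictions

print("equal opportunity difference:",
      equal_opportunity_difference(y_true, y_pred, group))
print("disparate impact:", disparate_impact(y_pred, group))
```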

Language: English

Citations: 114