MINIMAR (MINimum Information for Medical AI Reporting): Developing reporting standards for artificial intelligence in health care
Tina Hernandez‐Boussard, Selen Bozkurt, John P. A. Ioannidis et al.

Journal of the American Medical Informatics Association, Journal Year: 2020, Volume and Issue: 27(12), P. 2011 - 2015

Published: April 29, 2020

The rise of digital data and computing power have contributed to significant advancements in artificial intelligence (AI), leading to the use of AI classification and prediction models in health care to enhance clinical decision-making for diagnosis, treatment, and prognosis. However, such advances are limited by the lack of standards for reporting the data used to develop those models, the model architecture, and the model evaluation and validation processes. Here, we present MINIMAR (MINimum Information for Medical AI Reporting), a proposal describing the minimum information necessary to understand intended predictions, target populations, and hidden biases, and the ability to generalize these emerging technologies. We call for a standard to accurately and responsibly report on AI in health care. This will facilitate the design and implementation of AI models and promote the development of associated clinical decision support tools, as well as manage concerns regarding accuracy and bias.

Language: English

10 years of health-care reform in China: progress and gaps in Universal Health Coverage
Winnie Yip, Hongqiao Fu, Angela T. Chen et al.

The Lancet, Journal Year: 2019, Volume and Issue: 394(10204), P. 1192 - 1204

Published: Sept. 1, 2019

Language: English

Citations: 875

Do no harm: a roadmap for responsible machine learning for health care
Jenna Wiens, Suchi Saria, Mark Sendak et al.

Nature Medicine, Journal Year: 2019, Volume and Issue: 25(9), P. 1337 - 1340

Published: Aug. 19, 2019

Language: English

Citations: 694

Artificial intelligence with multi-functional machine learning platform development for better healthcare and precision medicine
Zeeshan Ahmed, Khalid Gaffer Mohamed, Saman Zeeshan et al.

Database, Journal Year: 2020, Volume and Issue: 2020

Published: Jan. 1, 2020

Precision medicine is one of the recent and powerful developments in medical care, which has the potential to improve the traditional symptom-driven practice of medicine, allowing earlier interventions using advanced diagnostics and tailoring better and economically personalized treatments. Identifying the best pathway to personalized and population medicine involves the ability to analyze comprehensive patient information together with broader aspects to monitor and distinguish between sick and relatively healthy people, which will lead to a better understanding of biological indicators that can signal shifts in health. While the complexities of disease at the individual level have made it difficult to utilize healthcare information in clinical decision-making, some of the existing constraints have been greatly minimized by technological advancements. To implement effective precision medicine with an enhanced ability to positively impact patient outcomes and provide real-time decision support, it is important to harness the power of electronic health records by integrating disparate data sources and discovering patient-specific patterns of disease progression. Useful analytic tools, technologies, databases, and approaches are required to augment networking and interoperability of clinical, laboratory and public health systems, as well as to address ethical and social issues related to privacy and protection of health data with an effective balance. Developing multifunctional machine learning platforms for clinical data extraction, aggregation, management and analysis can support clinicians by efficiently stratifying subjects to understand specific scenarios and optimize decision-making. Implementation of artificial intelligence in healthcare is a compelling vision that has the potential to lead to significant improvements in achieving the goals of providing real-time, better personalized and population medicine at lower costs. In this study, we focused on analyzing and discussing various published artificial intelligence and machine learning solutions, approaches and perspectives, aiming to advance academic solutions in paving the way for a new data-centric era of discovery in healthcare.

Language: English

Citations: 646

Human–computer collaboration for skin cancer recognition
Philipp Tschandl, Claus Rinner, Zoé Apalla et al.

Nature Medicine, Journal Year: 2020, Volume and Issue: 26(8), P. 1229 - 1234

Published: June 22, 2020

Language: English

Citations: 622

Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension
Xiaoxuan Liu, Samantha Cruz Rivera, David Moher et al.

Nature Medicine, Journal Year: 2020, Volume and Issue: 26(9), P. 1364 - 1374

Published: Sept. 1, 2020

The CONSORT 2010 statement provides minimum guidelines for reporting randomized trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate their impact on health outcomes. The CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human–AI interaction and the provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting, and will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes.

Language: English

Citations: 620

Swarm Learning for decentralized and confidential clinical machine learning
Stefanie Warnat‐Herresthal, Hartmut Schultze, Krishnaprasad Lingadahalli Shastry et al.

Nature, Journal Year: 2021, Volume and Issue: 594(7862), P. 265 - 270

Published: May 26, 2021

Fast and reliable detection of patients with severe and heterogeneous illnesses is a major goal of precision medicine [1,2]. Patients with leukaemia can be identified using machine learning on the basis of their blood transcriptomes [3]. However, there is an increasing divide between what is technically possible and what is allowed, because of privacy legislation [4,5]. Here, to facilitate the integration of any medical data from any data owner worldwide without violating privacy laws, we introduce Swarm Learning, a decentralized machine-learning approach that unites edge computing and blockchain-based peer-to-peer networking and coordination while maintaining confidentiality without the need for a central coordinator, thereby going beyond federated learning. To illustrate the feasibility of using Swarm Learning to develop disease classifiers using distributed data, we chose four use cases of heterogeneous diseases (COVID-19, tuberculosis, leukaemia and lung pathologies). With more than 16,400 blood transcriptomes derived from 127 clinical studies with non-uniform distributions of cases and controls and substantial study biases, as well as more than 95,000 chest X-ray images, we show that Swarm Learning classifiers outperform those developed at individual sites. In addition, Swarm Learning completely fulfils local confidentiality regulations by design. We believe that this approach will notably accelerate the introduction of precision medicine.
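The core idea the abstract describes is that model parameters, not patient data, travel between sites. A minimal toy sketch of that parameter-merging step (a deliberate simplification: the real Swarm Learning framework adds blockchain-based peer coordination and dynamic leader election, which are not shown here):

```python
# Toy sketch of the parameter-merging round in swarm- or federated-style
# learning. Each node trains locally, then every node adopts the
# element-wise average of all peers' weights; raw data never leaves a site.

def merge_round(node_weights):
    """One sync round: every node adopts the element-wise peer average."""
    n_nodes = len(node_weights)
    n_params = len(node_weights[0])
    merged = [sum(w[i] for w in node_weights) / n_nodes for i in range(n_params)]
    # no central coordinator retains the model; each peer keeps its own copy
    return [list(merged) for _ in range(n_nodes)]

# three hospitals with locally trained (toy) weight vectors
local = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
synced = merge_round(local)
# each site now holds [0.5, 0.5], the element-wise mean
```

In practice the merge runs repeatedly between local training epochs, which is how distributed sites with biased local cohorts can converge on a shared classifier.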

Language: English

Citations: 601

Artificial intelligence and the future of global health
Nina Schwalbe, Brian Wahl

The Lancet, Journal Year: 2020, Volume and Issue: 395(10236), P. 1579 - 1586

Published: May 1, 2020

Language: English

Citations: 597

Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study
Tom Nadarzynski, Oliver Miles, Aimee Cowie et al.

Digital Health, Journal Year: 2019, Volume and Issue: 5

Published: Jan. 1, 2019

Artificial intelligence (AI) is increasingly being used in healthcare. Here, AI-based chatbot systems can act as automated conversational agents, capable of promoting health, providing education, and potentially prompting behaviour change. Exploring the motivation to use health chatbots is required to predict uptake; however, few studies to date have explored their acceptability. This research aimed to explore participants' willingness to engage with AI-led health chatbots. The study incorporated semi-structured interviews (N = 29) which informed the development of an online survey (N = 216) advertised via social media. Interviews were recorded, transcribed verbatim and analysed thematically. A survey of 24 items assessed demographic and attitudinal variables, including acceptability and perceived utility. The quantitative data were analysed using binary regressions with a single categorical predictor. Three broad themes: 'Understanding of chatbots', 'AI hesitancy' and 'Motivations for health chatbots' were identified, outlining concerns about accuracy, cyber-security, and the inability of AI-led services to empathise. The survey showed moderate acceptability (67%), which correlated negatively with poorer IT skills (OR = 0.32 [95% CI 0.13-0.78]) and a dislike of talking to computers (OR = 0.77 [95% CI 0.60-0.99]), and positively with perceived utility (OR = 5.10 [95% CI 3.08-8.43]), positive attitude (OR = 2.71 [95% CI 1.77-4.16]) and perceived trustworthiness (OR = 1.92 [95% CI 1.13-3.25]). Most internet users would be receptive to using health chatbots, although hesitancy regarding this technology is likely to compromise engagement. Intervention designers focusing on AI-led health chatbots need to employ user-centred and theory-based approaches addressing patients' concerns and optimising user experience in order to achieve the best uptake and utilisation. Patients' motivation, perspectives and capabilities need to be taken into account when developing and assessing the effectiveness of health chatbots.
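The odds ratios above come from binary logistic regressions. As a reading aid, here is a hedged sketch (with an illustrative coefficient, not a value taken from the study's data) of how a logistic coefficient and its standard error convert into an odds ratio with a Wald-type 95% confidence interval:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a Wald-type 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# illustrative numbers only, not estimates from the study
or_, lo, hi = odds_ratio_ci(beta=-1.14, se=0.45)
# or_ < 1: the predictor is associated with lower odds of acceptability;
# the whole interval sits below 1, so the association is nominally significant
```

An OR of 0.32, for example, means the odds of finding chatbots acceptable are roughly a third as high for respondents with poorer IT skills, holding the model's framing constant.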

Language: English

Citations: 586

Artificial intelligence in cancer diagnosis and prognosis: Opportunities and challenges
Shigao Huang, Jie Yang, Simon Fong et al.

Cancer Letters, Journal Year: 2019, Volume and Issue: 471, P. 61 - 71

Published: Dec. 10, 2019

Language: English

Citations: 497

The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies
Aniek F. Markus, Jan A. Kors, Peter R. Rijnbeek et al.

Journal of Biomedical Informatics, Journal Year: 2020, Volume and Issue: 113, P. 103655 - 103655

Published: Dec. 10, 2020

Artificial intelligence (AI) has huge potential to improve the health and well-being of people, but adoption in clinical practice is still limited. Lack of transparency is identified as one of the main barriers to implementation, as clinicians should be confident the AI system can be trusted. Explainable AI has the potential to overcome this issue and can be a step towards trustworthy AI. In this paper we review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems for the health-care domain and contribute to formalization of the field of explainable AI. We argue that the reason to demand explainability determines what should be explained, as this determines the relative importance of the properties of explainability (i.e. interpretability and fidelity). Based on this, we propose a framework to guide the choice between classes of explainable AI methods (explainable modelling versus post-hoc explanation; model-based, attribution-based, or example-based explanations; global and local explanations). Furthermore, we find that quantitative evaluation metrics, which are important for objective standardized evaluation, are still lacking for some properties (e.g. clarity) and types of explanations (e.g. example-based methods). We conclude that explainable modelling can contribute to trustworthy AI, but that the benefits of explainability still need to be proven in practice, and that complementary measures might be needed to create trustworthy AI in health care (e.g. reporting data quality, performing extensive (external) validation, and regulation).
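One concrete instance of the "post-hoc, attribution-based, model-agnostic" class the survey's framework distinguishes is permutation importance. A minimal sketch (toy model and data invented for illustration; this is not the survey's own method):

```python
import random

def accuracy(preds, y):
    """Fraction of predictions matching the labels."""
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def permutation_importance(model, X, y, feature, n_repeats=20, seed=0):
    """Post-hoc, model-agnostic attribution sketch: the mean drop in
    accuracy when one feature column is randomly shuffled."""
    rng = random.Random(seed)
    base = accuracy(model(X), y)
    drops = []
    for _ in range(n_repeats):
        Xp = [row[:] for row in X]            # copy so X is untouched
        col = [row[feature] for row in Xp]
        rng.shuffle(col)                      # break the feature-label link
        for row, v in zip(Xp, col):
            row[feature] = v
        drops.append(base - accuracy(model(Xp), y))
    return sum(drops) / n_repeats

# toy "model": predicts 1 whenever feature 0 exceeds 0.5; feature 1 is ignored
model = lambda X: [1 if row[0] > 0.5 else 0 for row in X]
X = [[0.9, 0.1], [0.8, 0.7], [0.1, 0.9], [0.2, 0.3]]
y = [1, 1, 0, 0]
imp_used = permutation_importance(model, X, y, feature=0)
imp_unused = permutation_importance(model, X, y, feature=1)
# shuffling the ignored feature can never change predictions, so imp_unused is 0.0
```

Because it only queries the model's predictions, the explanation is faithful to model behaviour (high fidelity) but says nothing about the model's internal mechanism, which is exactly the trade-off the survey's interpretability-versus-fidelity axis captures.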

Language: English

Citations: 486