Bias in medical AI: Implications for clinical decision-making
James M. Cross,

Michael A. Choma,

John A. Onofrey

et al.

PLOS Digital Health, Journal year: 2024, Issue: 3(11), Pages: e0000651 - e0000651

Published: Nov. 7, 2024

Biases in medical artificial intelligence (AI) arise and compound throughout the AI lifecycle. These biases can have significant clinical consequences, especially in applications that involve clinical decision-making. Left unaddressed, biased medical AI can lead to substandard clinical decisions and the perpetuation and exacerbation of longstanding healthcare disparities. We discuss potential biases that can arise at different stages of the AI development pipeline and how they affect AI algorithms and clinical decision-making. Bias can occur in data features and labels, model development and evaluation, deployment, and publication. Insufficient sample sizes for certain patient groups can result in suboptimal performance, algorithm underestimation, and clinically unmeaningful predictions. Missing patient findings can also produce biased model behavior, including capturable but nonrandomly missing data, such as diagnosis codes, and data that is not usually or not easily captured, such as social determinants of health. Expertly annotated labels used to train supervised learning models may reflect implicit cognitive biases or substandard care practices. Overreliance on performance metrics during model development may obscure bias and diminish a model's clinical utility. When applied to data outside the training cohort, model performance may deteriorate from previous validation and may do so differentially across subgroups. How end users interact with deployed solutions can introduce additional bias. Finally, where models are developed and published, and by whom, impacts the trajectories and priorities of future medical AI development. Solutions to mitigate bias must be implemented with care, and include the collection of large and diverse data sets, statistical debiasing methods, thorough model evaluation, emphasis on model interpretability, and standardized bias reporting and transparency requirements. Prior to real-world implementation in clinical settings, rigorous validation through clinical trials is critical to demonstrate unbiased application. Addressing biases across model development stages is crucial for ensuring that all patients benefit equitably from the future of medical AI.
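
The abstract's point that overall performance metrics can obscure differential deterioration across subgroups can be made concrete with a stratified evaluation. The sketch below is a minimal, hypothetical Python illustration; the simulated data, subgroup labels, and choice of AUC as the metric are assumptions for demonstration, not taken from the article.

```python
# Minimal sketch of subgroup-stratified model evaluation (illustrative only;
# the simulated cohort and subgroup labels are hypothetical, not from the article).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated predictions and labels for two imbalanced patient subgroups.
n = 1000
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])
y_true = rng.integers(0, 2, size=n)
# Make the model's scores noisier (less informative) for the smaller subgroup.
noise = np.where(group == "B", 0.8, 0.3)
y_score = np.clip(y_true + rng.normal(0.0, noise, size=n), 0, 1)

# An overall metric can hide subgroup gaps, so report both.
print(f"overall AUC: {roc_auc_score(y_true, y_score):.3f}")
for g in ["A", "B"]:
    mask = group == g
    print(f"subgroup {g} AUC (n={mask.sum()}): "
          f"{roc_auc_score(y_true[mask], y_score[mask]):.3f}")
```

Reporting the per-subgroup metrics alongside the overall value is one simple way to surface the differential performance the authors describe.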

Language: English

Fruit and vegetable disease detection and classification: Recent trends, challenges, and future opportunities
Sachin Kumar Gupta, Ashish Kumar Tripathi

Engineering Applications of Artificial Intelligence, Journal year: 2024, Issue: 133, Pages: 108260 - 108260

Published: March 14, 2024

Language: English

Cited

20

“I Wonder if my Years of Training and Expertise Will be Devalued by Machines”: Concerns About the Replacement of Medical Professionals by Artificial Intelligence
Moustaq Karim Khan Rony, Mst. Rina Parvin, Md. Wahiduzzaman

et al.

SAGE Open Nursing, Journal year: 2024, Issue: 10

Published: Jan. 1, 2024

The rapid integration of artificial intelligence (AI) into healthcare has raised concerns among professionals about the potential displacement of human medical professionals by AI technologies. However, the apprehensions and perspectives of healthcare workers regarding the potential substitution of them with AI remain largely unknown.

Language: English

Cited

20

Explainable artificial intelligence: A survey of needs, techniques, applications, and future direction
Melkamu Mersha, Khang Nhứt Lâm, Joseph Wood

et al.

Neurocomputing, Journal year: 2024, Issue: 599, Pages: 128111 - 128111

Published: Sep. 1, 2024

Language: English

Cited

20

Improved prostate cancer diagnosis using a modified ResNet50-based deep learning architecture
Fatma M. Talaat, Shaker El-Sappagh, Khaled Alnowaiser

et al.

BMC Medical Informatics and Decision Making, Journal year: 2024, Issue: 24(1)

Published: Jan. 24, 2024

Prostate cancer, the most common cancer in men, is influenced by age, family history, genetics, and lifestyle factors. Early detection of prostate cancer using screening methods improves outcomes, but the balance between overdiagnosis and early detection remains debated. Using Deep Learning (DL) algorithms for prostate cancer detection offers a promising solution for accurate and efficient diagnosis, particularly in cases where imaging is challenging. In this paper, we propose the Prostate Cancer Detection Model (PCDM) for the automatic diagnosis of prostate cancer and demonstrate its clinical applicability to aid prostate cancer management in real-world healthcare environments. The PCDM is a modified ResNet50-based architecture that integrates Faster R-CNN and dual optimizers to improve the performance of the detection process. The model is trained on a large dataset of annotated medical images, and the experimental results show that the proposed model outperforms both ResNet50 and VGG19 architectures. Specifically, it achieves high sensitivity, specificity, precision, and accuracy rates of 97.40%, 97.09%, 97.56%, and 95.24%, respectively.
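
For readers less familiar with the four reported rates, the short Python sketch below shows how sensitivity, specificity, precision, and accuracy are derived from a binary confusion matrix; the toy labels and predictions are invented for illustration and are not the article's data or results.

```python
# Computing the four reported metrics from a binary confusion matrix.
# The counts below are toy values for illustration, not the PCDM results.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]   # 1 = cancer, 0 = benign (toy labels)
y_pred = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]   # toy model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)               # recall for the positive (cancer) class
specificity = tn / (tn + fp)               # true negative rate
precision = tp / (tp + fp)                 # positive predictive value
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(f"sensitivity={sensitivity:.2%}  specificity={specificity:.2%}  "
      f"precision={precision:.2%}  accuracy={accuracy:.2%}")
```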

Language: English

Cited

19
