Anatomic Interpretability in Neuroimage Deep Learning: Saliency Approaches for Typical Aging and Traumatic Brain Injury

Kevin Guo,

Nikhil N. Chaudhari, Tamara Jafar

et al.

Neuroinformatics, Year: 2024, Volume: 22(4), pp. 591–606

Published: Nov. 6, 2024

Abstract The black box nature of deep neural networks (DNNs) makes researchers and clinicians hesitant to rely on their findings. Saliency maps can enhance DNN explainability by suggesting the anatomic localization of relevant brain features. This study compares seven popular attribution-based saliency approaches to assign neuroanatomic interpretability to DNNs that estimate biological age (BA) from magnetic resonance imaging (MRI). Cognitively normal (CN) adults (N = 13,394; 5,900 males; mean age: 65.82 ± 8.89 years) are included for training, testing, validation, and saliency map generation during BA estimation. To test saliency robustness in the presence of deviations from normality, maps are also generated for adults with mild traumatic brain injury (mTBI; N = 214; 135 males; mean age: 55.3 ± 9.9 years). We assess the methods' capacities to capture known anatomic features of aging and compare them against a surrogate ground truth whose saliency is known a priori. Anatomic features are identified most reliably by the integrated gradients method, which outperforms all others through its ability to localize saliency within the brain. Gradient Shapley additive explanations, input × gradient, and masked gradient methods perform less consistently but still highlight ubiquitous features of aging (ventricle dilation, hippocampal atrophy, sulcal widening). Methods involving vanilla saliency, guided backpropagation, and gradient-weighted class activation mapping assign saliency outside the brain, which is undesirable. Our research suggests the relative tradeoffs of saliency methods used to interpret DNN findings during BA estimation in typical aging and after mTBI.
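The integrated gradients method highlighted in this abstract attributes a model's output to its inputs by accumulating gradients along a straight-line path from a baseline to the input. The following is a minimal, generic sketch of the technique on a toy differentiable function with a known analytic gradient; it is not the authors' MRI pipeline, and all function names are illustrative:

```python
def integrated_gradients(grad_f, x, baseline, steps=100):
    """Approximate integrated gradients attributions:
    IG_i(x) = (x_i - b_i) * integral_0^1 df/dx_i(b + a*(x - b)) da,
    via a midpoint Riemann sum over `steps` interpolation points."""
    n = len(x)
    accum = [0.0] * n
    for k in range(steps):
        a = (k + 0.5) / steps  # midpoint of each sub-interval
        point = [b + a * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(n):
            accum[i] += g[i]
    return [(xi - b) * s / steps for xi, b, s in zip(x, baseline, accum)]


# Toy "model": f(x) = sum(x_i^2), with analytic gradient 2*x.
f = lambda x: sum(xi * xi for xi in x)
grad_f = lambda x: [2.0 * xi for xi in x]

attr = integrated_gradients(grad_f, x=[3.0, 4.0], baseline=[0.0, 0.0])
# Completeness axiom: attributions sum to f(x) - f(baseline) = 25.0 - 0.0.
```

The completeness property (attributions summing to the change in model output) is what makes integrated gradients attractive for anatomic localization: attribution mass cannot appear from nowhere, so voxels outside the brain should, ideally, receive none.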

Language: English

Cited by

1

New Era of Intelligent Medicine: Future Scope and Challenges

Ashwani Kumar,

Aanchal Gupta, Utkarsh Raj

et al.

2022 10th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Year: 2024, Volume: unknown, pp. 1–6

Published: Mar. 14, 2024

The integration of Artificial Intelligence (AI) into the global healthcare landscape has undergone a remarkable transformation, presenting unprecedented opportunities and challenges. This review explores AI's transformative impact in health care, examining current applications and growth projections; the projected compound annual growth rate (CAGR) for the AI healthcare market is 37%, reaching $188 billion by 2030. AI's potential to reduce drug development costs and prevent medication dosing errors is evident. From early models like CASNET to contemporary Deep Learning, AI has revolutionized medical diagnostics. The review envisions a future with healthcare made accessible through chatbots and telemedicine, data-driven platforms for personalized treatment, and patient data cards. Technological advancements, including increased computational power and cloud storage, play a pivotal role, alongside challenges in managing vast and heterogeneous data. The review concludes by addressing the dynamic challenges the field must overcome to realize this impact.

Language: English

Cited by

1

Integrating BERT Embeddings with SVM for Prostate Cancer Prediction

A. S. M. M. R. Khan,

Fariba Tasnia Khan, Tanjim Mahmud

et al.

Published: May 2, 2024

Prostate cancer diagnosis is a critical area in oncology, where accurate and timely identification of malignancy is imperative for effective treatment. In this paper, we propose an approach that integrates BERT (Bidirectional Encoder Representations from Transformers) embeddings with an SVM for the task of prostate cancer diagnosis. Leveraging BERT's ability to capture complex contextual relationships within textual medical data, we extract clinical features and utilize an RBF kernel to construct a robust classification model. The SVM, with its ability to find clear decision boundaries, can provide reliable classification. The methodology is validated on a dataset containing diverse parameters associated with prostate cancer cases. Our experimental results demonstrate the efficacy of the proposed model, showcasing improved diagnostic accuracy compared to traditional approaches. The hybrid model, integrating both textual and numerical features, demonstrated a commendable accuracy of 95%, outperforming a model with 86% accuracy that relies solely on numerical data.
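The abstract pairs BERT-derived text embeddings with an RBF-kernel SVM. The kernel at the core of that classifier is compact enough to sketch in pure Python; the gamma value and vectors below are illustrative, not taken from the paper, where the inputs would instead be high-dimensional BERT sentence embeddings:

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian RBF kernel used by the SVM: K(x, z) = exp(-gamma * ||x - z||^2).
    In the paper's setting, x and z would be BERT embeddings of clinical text
    (e.g. 768-dimensional vectors); here they are toy 2-D vectors."""
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

# Identical embeddings give maximal similarity; distance shrinks the kernel value.
same = rbf_kernel([1.0, 2.0], [1.0, 2.0])   # 1.0
far  = rbf_kernel([1.0, 2.0], [4.0, 6.0])   # exp(-0.5 * 25) -> very small
```

In practice such embeddings are typically fed to an off-the-shelf implementation such as scikit-learn's `SVC(kernel="rbf")`, which handles the margin optimization; the kernel above is only the similarity measure that makes non-linear decision boundaries possible.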

Language: English

Cited by

1

Hybrid Deep Transfer Learning Framework for Humerus Fracture Detection and Classification from X-ray Images
Puja Dey, Tanjim Mahmud,

Khan Md. Foysol

et al.

Published: Jun. 21, 2024

Language: English

Cited by

1

Anatomic Interpretability in Neuroimage Deep Learning: Saliency Approaches for Typical Aging and Traumatic Brain Injury

Kevin Guo,

Nikhil N. Chaudhari, Tamara Jafar

et al.

Research Square (Research Square), Year: 2024, Volume: unknown

Published: Oct. 16, 2024

Abstract The black box nature of deep neural networks (DNNs) makes researchers and clinicians hesitant to rely on their findings. Saliency maps can enhance DNN explainability by suggesting the anatomic localization of relevant brain features. This study compares seven popular attribution-based saliency approaches to assign neuroanatomic interpretability to DNNs that estimate biological age (BA) from magnetic resonance imaging (MRI). Cognitively normal (CN) adults (N = 13,394; 5,900 males; mean age: 65.82 ± 8.89 years) are included for training, testing, validation, and saliency map generation during BA estimation. To test saliency robustness in the presence of deviations from normality, maps are also generated for adults with mild traumatic brain injury (mTBI; N = 214; 135 males; mean age: 55.3 ± 9.9 years). We assess the methods' capacities to capture known anatomic features of aging and compare them against a surrogate ground truth whose saliency is known a priori. Anatomic features are identified most reliably by the integrated gradients method, which outperforms all others through its ability to localize saliency within the brain. Gradient Shapley additive explanations, input × gradient, and masked gradient methods perform less consistently but still highlight ubiquitous features of aging (ventricle dilation, hippocampal atrophy, sulcal widening). Methods involving vanilla saliency, guided backpropagation, and gradient-weighted class activation mapping assign saliency outside the brain, which is undesirable. Our research suggests the relative tradeoffs of saliency methods used to interpret DNN findings during BA estimation in typical aging and after mTBI.

Language: English

Cited by

1
