Anatomic Interpretability in Neuroimage Deep Learning: Saliency Approaches for Typical Aging and Traumatic Brain Injury

Kevin Guo,

Nikhil N. Chaudhari, Tamara Jafar

et al.

Neuroinformatics, 2024, 22(4), pp. 591–606

Published: Nov. 6, 2024

Abstract: The black box nature of deep neural networks (DNNs) makes researchers and clinicians hesitant to rely on their findings. Saliency maps can enhance DNN explainability by suggesting the anatomic localization of relevant brain features. This study compares seven popular attribution-based saliency approaches to assign neuroanatomic interpretability to DNNs that estimate biological age (BA) from magnetic resonance imaging (MRI). Cognitively normal (CN) adults (N = 13,394, 5,900 males; mean age: 65.82 ± 8.89 years) are included for training, testing, validation, and saliency map generation to estimate BA. To assess robustness in the presence of anatomic deviations from normality, maps are also generated for adults with mild traumatic brain injury (mTBI, N = 214, 135 males; mean age: 55.3 ± 9.9 years). We assess the methods' capacities to capture known anatomic features of aging and compare them against a surrogate ground truth whose anatomy is known a priori. Anatomic features are identified most reliably by the integrated gradients method, which outperforms all others through its ability to localize saliency. Gradient Shapley additive explanations, input × gradient, and masked gradient perform less consistently but still highlight ubiquitous features of aging (ventricle dilation, hippocampal atrophy, sulcal widening). The methods involving vanilla saliency, guided backpropagation, and gradient-weighted class activation mapping localize saliency outside the brain, which is undesirable. Our research suggests the relative tradeoffs of saliency methods to interpret DNN findings during BA estimation in typical aging and after mTBI.
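The integrated gradients method that this study favors can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: a hypothetical quadratic function (`w`, `Q`) stands in for the DNN age estimator, and a midpoint Riemann sum approximates the path integral. The completeness property checked at the end (attributions sum to f(x) − f(baseline)) is what makes the method suitable for anatomic localization.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=256):
    """Approximate integrated gradients with a midpoint Riemann sum
    along the straight-line path from baseline to x."""
    alphas = (np.arange(steps) + 0.5) / steps           # midpoints in (0, 1)
    path = baseline + alphas[:, None] * (x - baseline)  # (steps, d) path points
    grads = np.stack([grad_f(p) for p in path])         # gradient at each point
    return (x - baseline) * grads.mean(axis=0)          # per-feature attribution

# Hypothetical stand-in for a DNN age estimator: f(x) = w.x + x.Q.x
rng = np.random.default_rng(0)
d = 8
w, Q = rng.normal(size=d), rng.normal(size=(d, d))
f = lambda x: w @ x + x @ Q @ x
grad_f = lambda x: w + (Q + Q.T) @ x

x, baseline = rng.normal(size=d), np.zeros(d)
attr = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline)
print(abs(attr.sum() - (f(x) - f(baseline))))  # ~0 (exact for a quadratic f)
```

For a quadratic model the gradient is linear along the path, so the midpoint rule recovers the integral exactly; for a real DNN the sum is only approximate and `steps` trades cost for accuracy.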

Language: English

Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It
Yasir Hafeez, Khuhed Memon, Maged S. Al-Quraishi

et al.

Diagnostics, 2025, 15(2), 168

Published: Jan. 13, 2025

Background: Artificial intelligence (AI) has recently made unprecedented contributions in every walk of life, but it has not been able to work its way into diagnostic medicine and standard clinical practice yet. Although data scientists, researchers, and medical experts have been working toward designing and developing computer-aided diagnosis (CAD) tools to serve as assistants to doctors, their large-scale adoption and integration into the healthcare system still seems far-fetched. Diagnostic radiology is no exception. Imaging techniques like magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) scans have been widely and very effectively employed by radiologists and neurologists for the differential diagnoses of neurological disorders for decades, yet no AI-powered systems to analyze such scans have been incorporated into the standard operating procedures of healthcare systems. Why? It is absolutely understandable that in medicine, precious human lives are on the line, and hence there is no room for even the tiniest mistakes. Nevertheless, with the advent of explainable artificial intelligence (XAI), the old-school black boxes of deep learning (DL) have been unraveled. Would XAI be the turning point for medical experts to finally embrace AI in diagnostic radiology? This review is a humble endeavor to find the answers to these questions. Methods: In this review, we present the journey of AI systems designed to recognize, preprocess, and analyze brain MRI scans for the differential diagnosis of various neurological disorders, with special emphasis on CAD tools embedded with explainability. A comprehensive review of the literature from 2017 to 2024 was conducted using a host of databases. We also present domain experts' opinions and summarize the challenges up ahead that need to be addressed in order to fully exploit the tremendous potential of XAI in its application to medical diagnostics and serve humanity. Results: Forty-seven studies were summarized and tabulated with information about the XAI technology and datasets employed, along with performance accuracies. The strengths and weaknesses of the studies were also discussed. In addition, the opinions of seven medical experts from around the world were presented to guide engineers and data scientists in developing such CAD tools. Conclusions: Current research was observed to focus on the enhancement of the performance accuracies of the DL regimens, with less attention being paid to the authenticity and usefulness of the explanations. A shortage of ground truth data for explainability was also observed. Visual explanation methods were found to dominate; however, they might not be enough, and more thorough, professor-like explanations would be required to build trust among healthcare professionals. Special attention to factors such as legal, ethical, safety, and security issues can bridge the current gap between XAI and routine clinical practice.

Language: English

Cited

2

Exhaustive Study into Machine Learning and Deep Learning Methods for Multilingual Cyberbullying Detection in Bangla and Chittagonian Texts
Tanjim Mahmud, Michał Ptaszyński, Fumito Masui

et al.

Electronics, 2024, 13(9), 1677

Published: Apr. 26, 2024

Cyberbullying is a serious problem in online communication, so it is important to find effective ways to detect cyberbullying content and make online environments safer. In this paper, we investigated the identification of cyberbullying contents from the Bangla and Chittagonian languages, which are both low-resource languages, with the latter being an extremely low-resource language. In this study, we used traditional baseline machine learning methods, as well as a wide suite of deep learning methods, especially focusing on hybrid networks and transformer-based multilingual models. For the data, we collected over 5,000 text samples from social media. Krippendorff's alpha and Cohen's kappa were used to measure the reliability of the dataset annotations. Traditional machine learning methods achieved accuracies ranging from 0.63 to 0.711, with SVM emerging as the top performer. Furthermore, employing ensemble models such as Bagging with 0.70 accuracy, Boosting with 0.69 accuracy, and Voting with 0.72 accuracy yielded promising results. In contrast, deep learning models, notably CNN with 0.811 accuracy, outperformed the traditional ML approaches, with CNN exhibiting the highest accuracy. We also proposed a series of hybrid network-based models, including BiLSTM+GRU with 0.799 accuracy, CNN+LSTM with 0.801 accuracy, CNN+BiLSTM with 0.78 accuracy, and CNN+GRU with 0.804 accuracy. Notably, the most complex model, (CNN+LSTM)+BiLSTM, attained 0.82 accuracy, showcasing the efficacy of hybrid architectures. We further explored transformer-based multilingual models, such as XLM-Roberta (0.841), BERT (0.822), Multilingual BERT (0.821, 0.82), and ELECTRA (0.785), which showed significantly enhanced accuracy levels. Our analysis demonstrates that deep learning methods can be highly effective in addressing the pervasive issue of cyberbullying in several different linguistic contexts. We show that transformer models can efficiently circumvent the language dependence that plagues conventional transfer learning methods. Our findings suggest that hybrid approaches and multilingual embeddings can effectively tackle cyberbullying across online platforms.
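The hard-voting ensemble reported above (0.72 accuracy) reduces to tallying each base classifier's predicted labels per sample. A minimal numpy sketch, with hypothetical base-model predictions standing in for the paper's SVM/CNN/BiLSTM outputs:

```python
import numpy as np

def hard_vote(preds):
    """Majority vote over base-classifier predictions.
    preds: (n_models, n_samples) array of integer class labels."""
    preds = np.asarray(preds)
    n_classes = preds.max() + 1
    counts = np.zeros((n_classes, preds.shape[1]), dtype=int)
    for row in preds:                          # tally each model's votes
        counts[row, np.arange(preds.shape[1])] += 1
    return counts.argmax(axis=0)               # ties resolve to the lower label

# Three hypothetical base models voting on five samples (1 = bullying)
svm    = np.array([1, 0, 1, 1, 0])
cnn    = np.array([1, 1, 1, 0, 0])
bilstm = np.array([0, 1, 1, 1, 0])
ensemble = hard_vote([svm, cnn, bilstm])
print(ensemble)  # -> [1 1 1 1 0]
```

Soft voting (averaging predicted probabilities instead of counting labels) is a common variant when the base models expose calibrated scores.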

Language: English

Cited

9

Explainable Artificial Intelligence in Neuroimaging of Alzheimer’s Disease

Mahdieh Taiyeb Khosroshahi,

Soroush Morsali, Sohrab Gharakhanlou

et al.

Diagnostics, 2025, 15(5), 612

Published: Mar. 4, 2025

Alzheimer's disease (AD) remains a significant global health challenge, affecting millions worldwide and imposing substantial burdens on healthcare systems. Advances in artificial intelligence (AI), particularly deep learning and machine learning, have revolutionized neuroimaging-based AD diagnosis. However, the complexity and lack of interpretability of these models limit their clinical applicability. Explainable Artificial Intelligence (XAI) addresses this challenge by providing insights into model decision-making, enhancing transparency, and fostering trust in AI-driven diagnostics. This review explores the role of XAI in AD neuroimaging, highlighting key techniques such as SHAP, LIME, Grad-CAM, and Layer-wise Relevance Propagation (LRP). We examine their applications in identifying critical biomarkers, tracking disease progression, and distinguishing AD stages using various imaging modalities, including MRI and PET. Additionally, we discuss current challenges, including dataset limitations, regulatory concerns, and standardization issues, and propose future research directions to improve XAI's integration into clinical practice. By bridging the gap between AI and clinical interpretability, XAI holds the potential to refine AD diagnostics, personalize treatment strategies, and advance neuroimaging research.
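Of the techniques this review highlights, Grad-CAM is the most common in imaging work, and its core computation is compact: pool the gradients of the target class score over each activation channel, use the pooled values as channel weights, and rectify the weighted sum. A toy numpy sketch (synthetic activations and gradients stand in for a real network's conv layer):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations and the
    gradients of the target class score w.r.t. those activations.
    activations, gradients: (channels, H, W) arrays."""
    weights = gradients.mean(axis=(1, 2))             # global-average-pool grads
    cam = np.tensordot(weights, activations, axes=1)  # weighted channel sum
    return np.maximum(cam, 0)                         # ReLU keeps positive evidence

rng = np.random.default_rng(1)
A = rng.normal(size=(16, 7, 7))   # synthetic feature maps
G = rng.normal(size=(16, 7, 7))   # synthetic gradients of the class score
heatmap = grad_cam(A, G)
print(heatmap.shape)  # (7, 7), non-negative
```

In practice the low-resolution heatmap is upsampled to the input image size and overlaid on the scan; libraries such as Captum wrap this bookkeeping.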

Language: English

Cited

1

Prediction of Alzheimer's disease stages based on ResNet-Self-attention architecture with Bayesian optimization and best features selection

Nabeela Yaqoob,

Muhammad Attique Khan,

Saleha Masood

et al.

Frontiers in Computational Neuroscience, 2024, 18

Published: Apr. 25, 2024

Alzheimer's disease (AD) is a neurodegenerative illness that impairs cognition, function, and behavior by causing irreversible damage to multiple brain areas, including the hippocampus. The suffering of patients and their family members will be lessened with an early diagnosis of AD. An automatic diagnosis technique is widely required due to the shortage of medical experts, and it eases the burden on medical staff. An artificial intelligence (AI)-based computerized method can help achieve better accuracy and precision rates. This study proposes a new automated framework for AD stage prediction based on a ResNet-Self architecture and a Fuzzy Entropy-controlled Path-Finding Algorithm (FEcPFA). A data augmentation technique has been utilized to resolve the dataset imbalance issue. In the next step, we proposed a new deep-learning model based on a self-attention module. A ResNet-50 architecture was modified and connected with a self-attention block for important information extraction. The hyperparameters were optimized using Bayesian optimization (BO) and then used to train the model, which was subsequently employed for feature extraction. The extracted features were selected using FEcPFA. The best features selected by FEcPFA were passed to machine learning classifiers for the final classification. The experimental process used a publicly available MRI dataset and achieved an improved accuracy of 99.9%. The results were compared with state-of-the-art (SOTA) techniques, demonstrating improvement in terms of accuracy and time efficiency.
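The self-attention block grafted onto ResNet-50 here is, at its core, scaled dot-product attention. A single-head numpy sketch (the token count and projection sizes are illustrative, not the paper's configuration):

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    x: (tokens, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])        # similarity logits
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ v, weights

rng = np.random.default_rng(2)
x = rng.normal(size=(5, 8))                 # 5 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out, attn = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (5, 4); each attention row sums to 1
```

In a CNN hybrid like the one described, the "tokens" are typically the spatial positions of a feature map flattened to (H·W, channels), letting the block mix information across distant brain regions.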

Language: English

Cited

6

Unraveling the Black Box: A Review of Explainable Deep Learning Healthcare Techniques
Nafeesa Yousuf Murad, Mohd Hilmi Hasan, Muhammad Hamza Azam

et al.

IEEE Access, 2024, 12, pp. 66556–66568

Published: Jan. 1, 2024

Language: English

Cited

4

An Evolutionary Federated Learning Approach to Diagnose Alzheimer’s Disease Under Uncertainty
Nanziba Basnin, Tanjim Mahmud, Raihan Ul Islam

et al.

Diagnostics, 2025, 15(1), 80

Published: Jan. 1, 2025

Background: Alzheimer’s disease (AD) leads to severe cognitive impairment and functional decline in patients, and its exact cause remains unknown. Early diagnosis of AD is imperative to enable timely interventions that can slow the progression of the disease. This research tackles the complexity and uncertainty of AD diagnosis by employing a multimodal approach that integrates medical imaging and demographic data. Methods: To scale this system to larger environments, such as hospital settings, and to ensure the sustainability, security, and privacy of sensitive data, this research employs both deep learning and federated learning frameworks. MRI images are pre-processed and fed into a convolutional neural network (CNN), which generates a prediction file. This file is then combined with the demographic data and distributed among clients for local training. Training is conducted both locally and globally using a belief rule base (BRB), which effectively integrates various data sources into a comprehensive diagnostic model. Results: The aggregated values from local training are collected on a central server. Various aggregation methods are evaluated to assess the performance of the model, with the results indicating that FedAvg outperforms the other methods, achieving a global accuracy of 99.9%. Conclusions: The BRB effectively manages the uncertainty associated with AD diagnosis, providing a robust framework for integrating and analyzing diverse diagnostic information. This research not only advances AD diagnostics but also underscores the potential of scalable, privacy-preserving healthcare solutions.
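The FedAvg aggregation that wins the comparison above is a weighted average of client parameters, with weights proportional to each client's local sample count. A minimal numpy sketch (the two-parameter vectors and client sizes are hypothetical):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average client model parameters, weighted by
    each client's number of local training samples."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()                     # per-client mixing weight
    return sum(c * w for c, w in zip(coeffs, client_weights))

# Three hypothetical clients with different amounts of local data
w1, w2, w3 = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
global_w = fedavg([w1, w2, w3], client_sizes=[100, 100, 200])
print(global_w)  # -> [3.5 4.5]
```

In a real round the server broadcasts `global_w` back to the clients, which resume local training from it; raw patient data never leaves the client, which is the privacy argument the abstract makes.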

Language: English

Cited

0

A Research Landscape Analysis on Alzheimer's Disease and Gerontechnology: Identifying Key Contributors, Hotspots, and Emerging Trends
Azliyana Azizan,

Susi Endrini

Archives of Gerontology and Geriatrics Plus, 2025, 100125

Published: Jan. 1, 2025

Language: English

Cited

0

Explainable AI for Bipolar Disorder Diagnosis Using Hjorth Parameters

Mehrnaz Saghab Torbati,

Ahmad Zandbagleh, Mohammad Reza Daliri

et al.

Diagnostics, 2025, 15(3), 316

Published: Jan. 29, 2025

Background: Despite the prevalence and severity of bipolar disorder (BD), current diagnostic approaches remain largely subjective. This study presents an automatic diagnostic framework using electroencephalography (EEG)-derived Hjorth parameters (activity, mobility, and complexity), aiming to establish objective neurophysiological markers for BD detection and provide insights into its underlying neural mechanisms. Methods: Using resting-state eyes-closed EEG data collected from 20 BD patients and healthy controls (HCs), we developed a novel classification approach based on Hjorth parameters extracted across multiple frequency bands. We employed a rigorous leave-one-subject-out cross-validation strategy to ensure robust, subject-independent assessment, combined with explainable artificial intelligence (XAI) to identify the most discriminative features. Results: Our approach achieved remarkable classification accuracy (92.05%), with the activity of the beta and gamma bands emerging as the most discriminative features. XAI analysis revealed that anterior brain regions in these higher frequency bands contributed significantly to BD detection, providing new insights into BD. Conclusions: This study demonstrates the exceptional utility of Hjorth parameters, particularly in the higher frequency ranges and anterior brain regions, for BD detection. The findings not only offer a promising approach for automated BD diagnosis but also provide a valuable basis for studying related disorders. The robust performance and interpretability of our framework suggest its potential as a clinical tool for BD diagnosis.
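The three Hjorth parameters used as features here are cheap time-domain statistics built from a signal's variance and the variances of its first and second differences. A numpy sketch (the sampling rate and test signal are illustrative, not from the study):

```python
import numpy as np

def hjorth(x):
    """Hjorth parameters of a 1-D signal:
    activity   = variance (signal power),
    mobility   = sqrt(var(dx)/var(x)), a dominant-frequency proxy,
    complexity = mobility(dx)/mobility(x), a bandwidth proxy."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# A pure sinusoid has complexity ~1; broadband signals score higher.
t = np.arange(0, 2, 1 / 250)            # 2 s at a 250 Hz sampling rate
sine = np.sin(2 * np.pi * 10 * t)       # 10 Hz oscillation
act, mob, comp = hjorth(sine)
print(round(comp, 3))  # close to 1
```

Band-specific features, as in this study, come from applying the same formulas to band-pass filtered copies of each EEG channel.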

Language: English

Cited

0

Explainable MRI-Based Ensemble Learnable Architecture for Alzheimer’s Disease Detection
Opeyemi Adeniran, Blessing Ojeme,

Temitope Ezekiel Ajibola

et al.

Algorithms, 2025, 18(3), 163

Published: Mar. 13, 2025

With the advancements in deep learning methods, AI systems now perform at the same or a higher level than human intelligence in many complex real-world problems. The data and algorithmic opacity of deep learning models, however, make the task of comprehending the input information, the model, and the model’s decisions quite challenging. This lack of transparency constitutes both a practical and an ethical issue. For the present study, it is a major drawback to the deployment of deep learning methods mandated with detecting patterns and prognosticating Alzheimer’s disease. Many of the approaches presented in the medical literature for overcoming this critical weakness come at the cost of sacrificing accuracy for interpretability. This study is an attempt at addressing this challenge and fostering the reliability of AI-driven healthcare solutions. It explores commonly used perturbation-based interpretability methods (LIME) and gradient-based methods (Saliency and Grad-CAM) for visualizing and explaining an MRI image-based Alzheimer’s disease identification dataset, using the diagnostic and predictive strengths of an ensemble framework comprising Convolutional Neural Network (CNN) architectures (a custom multi-classifier CNN, VGG-19, ResNet, MobileNet, EfficientNet, and DenseNet) and a Vision Transformer (ViT). The experimental results show the stacking ensemble achieving a remarkable accuracy of 98.0%, while hard voting reached 97.0%. The findings present a valuable contribution to the growing field of explainable artificial intelligence (XAI) in medical imaging, helping end users and researchers gain an understanding of the backstory behind the image dataset and model decisions.
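Stacking, the stronger ensemble above, trains a meta-learner on the base models' held-out class probabilities rather than counting their votes. A loose numpy sketch under stated assumptions: a least-squares meta-learner stands in for the usual logistic-regression meta-model, and the "base models" are synthetic noisy probability matrices, not the paper's CNNs or ViT.

```python
import numpy as np

def fit_stacker(base_probs, labels, n_classes):
    """Fit a least-squares meta-learner on stacked base-model
    class probabilities. base_probs: (n_samples, n_models * n_classes)."""
    onehot = np.eye(n_classes)[labels]
    coef, *_ = np.linalg.lstsq(base_probs, onehot, rcond=None)
    return coef

def stack_predict(base_probs, coef):
    return (base_probs @ coef).argmax(axis=1)

rng = np.random.default_rng(3)
y = rng.integers(0, 3, size=200)                        # 3 hypothetical classes
# Two hypothetical base models: noisy versions of the one-hot truth
probs = np.hstack([np.eye(3)[y] + 0.3 * rng.random((200, 3))
                   for _ in range(2)])
coef = fit_stacker(probs, y, n_classes=3)
acc = (stack_predict(probs, coef) == y).mean()
print(acc)  # high on this easy synthetic task
```

The key practical detail, omitted here for brevity, is that `base_probs` must come from out-of-fold predictions; fitting the meta-learner on in-sample base outputs leaks labels and inflates the ensemble's apparent accuracy.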

Language: English

Cited

0

Enhancing Alzheimer’s Disease Detection: An Explainable Machine Learning Approach with Ensemble Techniques
Eram Mahamud, Md Assaduzzaman,

Jahirul Islam

et al.

Intelligence-Based Medicine, 2025, 100240

Published: Apr. 1, 2025

Language: English

Cited

0