Explainable AI: Enhancing Interpretability of Machine Learning Models DOI

Duru Kulaklıoğlu

Human Computer Interaction, Journal Year: 2024, Volume and Issue: 8(1), P. 91 - 91

Published: Dec. 6, 2024

Explainable Artificial Intelligence (XAI) is emerging as a critical field to address the “black box” nature of many machine learning (ML) models. While these models achieve high predictive accuracy, their opacity undermines trust, adoption, and ethical compliance in domains such as healthcare, finance, and autonomous systems. This research explores methodologies and frameworks that enhance the interpretability of ML models, focusing on techniques such as feature attribution, surrogate models, and counterfactual explanations. By balancing model complexity and transparency, this study highlights strategies that bridge the gap between performance and explainability. The integration of XAI into ML workflows not only fosters trust but also aligns with regulatory requirements, enabling actionable insights for stakeholders. The findings reveal a roadmap for designing inherently interpretable models and post-hoc analysis tools, offering a sustainable approach to democratizing AI.
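
As a generic illustration of the feature-attribution techniques this abstract mentions (not code from the paper itself), the sketch below computes permutation importance for a placeholder black-box classifier; the dataset, model, and parameters are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Placeholder data and black-box model (stand-ins for any opaque ML pipeline).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score degrades -- a simple post-hoc attribution method.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```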

Language: English

Exhaustive Study into Machine Learning and Deep Learning Methods for Multilingual Cyberbullying Detection in Bangla and Chittagonian Texts DOI Open Access
Tanjim Mahmud, Michał Ptaszyński, Fumito Masui

et al.

Electronics, Journal Year: 2024, Volume and Issue: 13(9), P. 1677 - 1677

Published: April 26, 2024

Cyberbullying is a serious problem in online communication, and it is important to find effective ways to detect cyberbullying content to make online environments safer. In this paper, we investigated the identification of cyberbullying content in the Bangla and Chittagonian languages, both of which are low-resource, with the latter being an extremely low-resource language. In the study, we used traditional baseline machine learning methods as well as a wide suite of deep learning methods, especially focusing on hybrid networks and transformer-based multilingual models. For data, we collected over 5000 text samples from social media. Krippendorff’s alpha and Cohen’s kappa were used to measure the reliability of the dataset annotations. Traditional machine learning methods achieved accuracies ranging from 0.63 to 0.711, with SVM emerging as the top performer. Furthermore, ensemble models such as Bagging (0.70 accuracy), Boosting (0.69 accuracy), and Voting (0.72 accuracy) yielded promising results. In contrast, deep learning models, notably CNN, reached accuracies of up to 0.811, thus outperforming the traditional ML approaches, with CNN exhibiting the highest accuracy. We also proposed a series of hybrid network-based models, including BiLSTM+GRU (0.799), CNN+LSTM (0.801), CNN+BiLSTM (0.78), and CNN+GRU (0.804). Notably, the most complex model, (CNN+LSTM)+BiLSTM, attained 0.82, showcasing the efficacy of hybrid architectures. We further explored transformer-based models: XLM-RoBERTa (0.841), BERT (0.822), multilingual variants (0.821 and 0.82), and ELECTRA (0.785) all showed significantly enhanced accuracy levels. Our analysis demonstrates that deep learning can be highly effective in addressing the pervasive issue of cyberbullying in several different linguistic contexts, and shows that transformer models can efficiently circumvent the language dependence that plagues conventional transfer learning methods. The findings suggest that hybrid approaches and multilingual embeddings can effectively tackle cyberbullying across platforms.
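
For concreteness, here is a minimal sketch of the kind of traditional baseline the study reports (TF-IDF features with a linear SVM); the texts, labels, and n-gram settings are placeholders, not the authors' 5000-sample corpus or exact configuration.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Placeholder corpus; the real study uses >5000 Bangla/Chittagonian samples.
texts = ["example bullying message", "friendly greeting",
         "another insulting post", "nice supportive comment"]
labels = [1, 0, 1, 0]  # 1 = cyberbullying, 0 = benign

# Character n-grams are a common choice for low-resource languages,
# since they sidestep the need for language-specific tokenizers.
clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                    LinearSVC())
scores = cross_val_score(clf, texts, labels, cv=2)
print("mean accuracy:", scores.mean())
```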

Language: English

Citations

9

Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It DOI Creative Commons
Yasir Hafeez, Khuhed Memon, Maged S. Al-Quraishi

et al.

Diagnostics, Journal Year: 2025, Volume and Issue: 15(2), P. 168 - 168

Published: Jan. 13, 2025

Background: Artificial intelligence (AI) has recently made unprecedented contributions in every walk of life, but it has not been able to work its way into diagnostic medicine and standard clinical practice yet. Although data scientists, researchers, and medical experts have been working in the direction of designing and developing computer aided diagnosis (CAD) tools to serve as assistants to doctors, their large-scale adoption and integration into the healthcare system still seems far-fetched. Diagnostic radiology is no exception. Imaging techniques like magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) scans have been widely and very effectively employed by radiologists and neurologists for the differential diagnoses of neurological disorders for decades, yet no AI-powered systems to analyze such scans have been incorporated into the standard operating procedures of healthcare systems. Why? It is absolutely understandable that in diagnostic medicine, precious human lives are on the line, and hence there is no room for even the tiniest mistakes. Nevertheless, with the advent of explainable artificial intelligence (XAI), the old-school black boxes of deep learning (DL) have been unraveled. Would XAI be the turning point for medical experts to finally embrace AI in diagnostic radiology? This review is a humble endeavor to find the answers to these questions. Methods: In this review, we present the journey of AI systems designed to recognize, preprocess, and analyze brain MRI scans for the differential diagnoses of various neurological disorders, with special emphasis on CAD tools embedded with explainability. A comprehensive review of the literature from 2017 to 2024 was conducted using a host of databases. We also present domain experts’ opinions and summarize the challenges up ahead that need to be addressed in order to fully exploit the tremendous potential of XAI in its application to medical diagnostics and the service of humanity. Results: Forty-seven studies were summarized and tabulated with information about the technology and datasets employed, along with the performance accuracies. The strengths and weaknesses of the studies are also discussed. In addition, the opinions of seven medical experts from around the world are presented to guide engineers and data scientists in developing such CAD tools. Conclusions: Current research was observed to be focused on the enhancement of the accuracies of DL regimens, with less attention being paid to the authenticity and usefulness of explanations. A shortage of ground truth data for explainability was also observed. Visual explanation methods were found to dominate; however, they might not be enough, and more thorough professor-like explanations would be required to build the trust of healthcare professionals. Special attention to factors like legal, ethical, safety, and security issues can bridge the current gap between XAI and routine clinical practice.

Language: English

Citations

1

Prediction of Alzheimer's disease stages based on ResNet-Self-attention architecture with Bayesian optimization and best features selection DOI Creative Commons

Nabeela Yaqoob, Muhammad Attique Khan, Saleha Masood

et al.

Frontiers in Computational Neuroscience, Journal Year: 2024, Volume and Issue: 18

Published: April 25, 2024

Alzheimer's disease (AD) is a neurodegenerative illness that impairs cognition, function, and behavior by causing irreversible damage to multiple brain areas, including the hippocampus. The suffering of patients and their family members will be lessened with an early diagnosis of AD. An automatic diagnosis technique is widely required due to the shortage of medical experts, and it eases the burden on medical staff. An artificial intelligence (AI)-based computerized method can help experts achieve better accuracy and precision rates. This study proposes a new automated framework for AD stage prediction based on a ResNet-Self architecture and a Fuzzy Entropy-controlled Path-Finding Algorithm (FEcPFA). A data augmentation technique has been utilized to resolve the dataset imbalance issue. In the next step, we proposed a new deep-learning model based on a self-attention module. ResNet-50 was modified and connected with a self-attention block for important information extraction. The model hyperparameters were optimized using Bayesian optimization (BO) and then used to train the model, which was subsequently employed for feature extraction. The extracted features were selected using FEcPFA. The best features selected by FEcPFA were passed to machine learning classifiers for the final classification. The experimental process used a publicly available MRI dataset and achieved an improved accuracy of 99.9%. The results were compared with state-of-the-art (SOTA) techniques, demonstrating an improvement in terms of accuracy and time efficiency.
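
The abstract does not specify the exact ResNet-Self design, so the following is only a plausible minimal sketch: a ResNet-50 backbone followed by a spatial self-attention block before pooling. The attention formulation, class count, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SelfAttention2d(nn.Module):
    """Self-attention over the spatial positions of a CNN feature map."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)               # (b, hw, c//8)
        k = self.key(x).flatten(2)                                 # (b, c//8, hw)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # (b, hw, hw)
        v = self.value(x).flatten(2).transpose(1, 2)               # (b, hw, c)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.gamma * out

class ResNetSelf(nn.Module):
    """ResNet-50 backbone with a self-attention block (AD stage count assumed)."""
    def __init__(self, num_classes=4):
        super().__init__()
        backbone = resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.attn = SelfAttention2d(2048)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(2048, num_classes)

    def forward(self, x):
        f = self.attn(self.features(x))
        return self.fc(self.pool(f).flatten(1))
```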

Language: English

Citations

5

Unraveling the Black Box: A Review of Explainable Deep Learning Healthcare Techniques DOI Creative Commons
Nafeesa Yousuf Murad, Mohd Hilmi Hasan, Muhammad Hamza Azam

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 66556 - 66568

Published: Jan. 1, 2024

Language: English

Citations

4

An Evolutionary Federated Learning Approach to Diagnose Alzheimer’s Disease Under Uncertainty DOI Creative Commons
Nanziba Basnin, Tanjim Mahmud, Raihan Ul Islam

et al.

Diagnostics, Journal Year: 2025, Volume and Issue: 15(1), P. 80 - 80

Published: Jan. 1, 2025

Background: Alzheimer’s disease (AD) leads to severe cognitive impairment and functional decline in patients, and its exact cause remains unknown. Early diagnosis of AD is imperative to enable timely interventions that can slow the progression of the disease. This research tackles the complexity and uncertainty of AD diagnosis by employing a multimodal approach that integrates medical imaging and demographic data. Methods: To scale this system to larger environments, such as hospital settings, and to ensure the sustainability, security, and privacy of sensitive data, this research employs both deep learning and federated learning frameworks. MRI images are pre-processed and fed into a convolutional neural network (CNN), which generates a prediction file. The file is then combined with the demographic data and distributed among clients for local training. Training is conducted both locally and globally using a belief rule base (BRB), which effectively integrates data from various sources into a comprehensive diagnostic model. Results: The aggregated values from local training are collected on a central server. Various aggregation methods are evaluated to assess the performance of the model, with the results indicating that FedAvg outperforms the other methods, achieving a global accuracy of 99.9%. Conclusions: The BRB effectively manages the uncertainty associated with AD data, providing a robust framework for integrating and analyzing diverse sources of information. This research not only advances AD diagnostics but also underscores the potential of scalable, privacy-preserving healthcare solutions.
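
FedAvg itself is standard: the server averages client model weights, weighted by local dataset size. A minimal sketch of that aggregation step (client states and sizes here are placeholders, not the paper's setup):

```python
import torch

def fedavg(client_states, client_sizes):
    """FedAvg aggregation: dataset-size-weighted average of client state_dicts."""
    total = sum(client_sizes)
    averaged = {}
    for key in client_states[0]:
        averaged[key] = sum(
            state[key].float() * (n / total)  # weight each client by its share of data
            for state, n in zip(client_states, client_sizes)
        )
    return averaged

# Usage: global_state = fedavg([c1.state_dict(), c2.state_dict()], [1200, 800])
```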

Language: English

Citations

0

A Research Landscape Analysis on Alzheimer's Disease and Gerontechnology: Identifying Key Contributors, Hotspots, and Emerging Trends DOI Creative Commons
Azliyana Azizan, Susi Endrini

Archives of Gerontology and Geriatrics Plus, Journal Year: 2025, Volume and Issue: unknown, P. 100125 - 100125

Published: Jan. 1, 2025

Language: English

Citations

0

Explainable AI for Bipolar Disorder Diagnosis Using Hjorth Parameters DOI Creative Commons

Mehrnaz Saghab Torbati, Ahmad Zandbagleh, Mohammad Reza Daliri

et al.

Diagnostics, Journal Year: 2025, Volume and Issue: 15(3), P. 316 - 316

Published: Jan. 29, 2025

Background: Despite the prevalence and severity of bipolar disorder (BD), current diagnostic approaches remain largely subjective. This study presents an automatic framework for BD diagnosis using electroencephalography (EEG)-derived Hjorth parameters (activity, mobility, and complexity), aiming to establish objective neurophysiological markers for BD detection and provide insights into its underlying neural mechanisms. Methods: Using resting-state eyes-closed EEG data collected from 20 patients with BD and healthy controls (HCs), we developed a novel classification approach based on Hjorth parameters extracted across multiple frequency bands. We employed a rigorous leave-one-subject-out cross-validation strategy to ensure a robust, subject-independent assessment, combined with explainable artificial intelligence (XAI) to identify the most discriminative features. Results: Our approach achieved remarkable classification accuracy (92.05%), with activity in the beta and gamma bands emerging as the most discriminative features. XAI analysis revealed that anterior brain regions in these higher frequency bands contributed significantly to BD detection, providing new insights into BD. Conclusions: This study demonstrates the exceptional utility of Hjorth parameters, particularly activity in the higher frequency ranges and anterior brain regions, for BD detection. The findings are not only promising for automated BD diagnosis but also offer a valuable basis for studying related disorders. The robust performance and interpretability of our approach suggest its potential as a clinical tool for BD diagnosis.
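
The three Hjorth parameters have standard definitions in terms of the variances of a signal and its successive derivatives; a minimal sketch follows (the per-band band-pass filtering used in the study is omitted here for brevity):

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal x."""
    dx = np.diff(x)    # first derivative (discrete difference)
    ddx = np.diff(dx)  # second derivative
    activity = np.var(x)                              # signal power
    mobility = np.sqrt(np.var(dx) / np.var(x))        # mean frequency proxy
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility  # bandwidth proxy
    return activity, mobility, complexity

# Usage on a synthetic signal:
t = np.linspace(0, 1, 256)
print(hjorth_parameters(np.sin(2 * np.pi * 10 * t)))
```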

Language: English

Citations

0

Explainable Artificial Intelligence in Neuroimaging of Alzheimer’s Disease DOI Creative Commons

Mahdieh Taiyeb Khosroshahi, Soroush Morsali, Sohrab Gharakhanlou

et al.

Diagnostics, Journal Year: 2025, Volume and Issue: 15(5), P. 612 - 612

Published: March 4, 2025

Alzheimer's disease (AD) remains a significant global health challenge, affecting millions worldwide and imposing substantial burdens on healthcare systems. Advances in artificial intelligence (AI), particularly deep learning and machine learning, have revolutionized neuroimaging-based AD diagnosis. However, the complexity and lack of interpretability of these models limit their clinical applicability. Explainable Artificial Intelligence (XAI) addresses this challenge by providing insights into model decision-making, enhancing transparency, and fostering trust in AI-driven diagnostics. This review explores the role of XAI in AD neuroimaging, highlighting key techniques such as SHAP, LIME, Grad-CAM, and Layer-wise Relevance Propagation (LRP). We examine their applications in identifying critical biomarkers, tracking disease progression, and distinguishing AD stages using various imaging modalities, including MRI and PET. Additionally, we discuss current challenges, including dataset limitations, regulatory concerns, and standardization issues, and propose future research directions to improve XAI's integration into clinical practice. By bridging the gap between AI performance and interpretability, XAI holds the potential to refine diagnostics, personalize treatment strategies, and advance AD research.
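
Of the techniques this review lists, Grad-CAM is compact enough to sketch. This minimal version (the model, target layer, and input are assumptions, and production use would rely on a library such as captum) weights a convolutional layer's activations by its spatially pooled gradients:

```python
import torch

def grad_cam(model, target_layer, image, class_idx):
    """Minimal Grad-CAM for one image of shape (C, H, W)."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    try:
        score = model(image.unsqueeze(0))[0, class_idx]
        model.zero_grad()
        score.backward()  # gradients of the class score w.r.t. the target layer
    finally:
        h1.remove(); h2.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # pooled gradients per channel
    cam = torch.relu((weights * acts["a"]).sum(dim=1))   # weighted activation map
    return (cam / (cam.max() + 1e-8))[0]                 # normalized heatmap (H', W')
```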

Language: English

Citations

0

Explainable MRI-Based Ensemble Learnable Architecture for Alzheimer’s Disease Detection DOI Creative Commons
Opeyemi Adeniran, Blessing Ojeme, Temitope Ezekiel Ajibola

et al.

Algorithms, Journal Year: 2025, Volume and Issue: 18(3), P. 163 - 163

Published: March 13, 2025

With the advancements in deep learning methods, AI systems now perform at the same or a higher level than human intelligence in many complex real-world problems. The data and algorithmic opacity of deep learning models, however, make the task of comprehending the input information, the model, and the model’s decisions quite challenging. This lack of transparency constitutes both a practical and an ethical issue. For the present study, it is a major drawback to the deployment of deep learning methods mandated with detecting patterns and prognosticating Alzheimer’s disease. Many approaches presented in the medical literature for overcoming this critical weakness sometimes come at the cost of sacrificing accuracy for interpretability. This study is an attempt at addressing this challenge and fostering the reliability of AI-driven healthcare solutions. It explores a few commonly used perturbation-based (LIME) and gradient-based (Saliency and Grad-CAM) interpretability techniques for visualizing and explaining the dataset and MRI image-based Alzheimer’s disease identification, using the diagnostic and predictive strengths of an ensemble framework comprising Convolutional Neural Network (CNN) architectures (a custom multi-classifier CNN, VGG-19, ResNet, MobileNet, EfficientNet, and DenseNet) and a Vision Transformer (ViT). The experimental results show the stacking ensemble achieving a remarkable accuracy of 98.0%, while hard voting reached 97.0%. The findings are a valuable contribution to the growing field of explainable artificial intelligence (XAI) in medical imaging, helping end users and researchers gain an understanding of the backstory behind the image dataset and model decisions.
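
As a small illustration of the hard-voting stage (the stacking meta-learner and the specific backbones are beyond a short sketch), each model votes with its predicted class and the majority wins:

```python
import torch

@torch.no_grad()
def hard_vote(models, x):
    """Hard-voting ensemble: per-sample majority vote across models."""
    preds = torch.stack([m(x).argmax(dim=1) for m in models])  # (n_models, batch)
    return preds.mode(dim=0).values                            # majority class per sample
```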

Language: English

Citations

0

An Explainable AI for Blood Image Classification With Dynamic CNN Model Selection Framework DOI

Datenji Sherpa, Dibakar Raj Pant

International Journal of Imaging Systems and Technology, Journal Year: 2025, Volume and Issue: 35(3)

Published: April 11, 2025

Explainable AI (XAI) frameworks are becoming essential in many areas, including the medical field, as they help us to understand model decisions, increasing clinical trust and improving patient care. This research presents a robust and comprehensive XAI framework. To classify images from the BloodMNIST and Raabin-WBC datasets, various pre-trained convolutional neural network (CNN) architectures (VGG, ResNet, DenseNet, EfficientNet, MobileNet variants, SqueezeNet, and Xception) are implemented both individually and in combination with SpinalNet. For the parameter analysis, four models (VGG16, VGG19, ResNet50, and ResNet101) were combined with SpinalNet. Notably, these SpinalNet hybrid models significantly reduced the model parameters while maintaining or even improving accuracy. For example, VGG16 + SpinalNet shows a 40.74% parameter reduction with accuracies of 98.92% (BloodMNIST) and 98.32% (Raabin-WBC). Similarly, the combinations of VGG19, ResNet50, and ResNet101 with SpinalNet resulted in weight reductions of 36.36%, 65.33%, and 52.13%, respectively, with improved accuracies for both datasets. These highly efficient models are well-suited for resource-limited environments. The authors have developed a dynamic model selection framework that selects the best model for each input based on prediction scores, prioritizing lightweight models in cases of ties. This method guarantees that for every input the most effective model is used, which results in higher accuracy as well as better outcomes. Three XAI techniques, Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Gradient-weighted Class Activation Mapping (Grad-CAM), are implemented to highlight the key features that influence the predictions. By combining XAI methods with dynamic model selection, this research not only achieves excellent classification performance but also provides useful insights into the elements driving the predictions.
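
The abstract describes selection by prediction score with a lightweight tie-break; a minimal sketch under those assumptions (single-image input, confidence taken as the maximum softmax probability) might look like this:

```python
import torch

def count_params(model):
    return sum(p.numel() for p in model.parameters())

@torch.no_grad()
def dynamic_select(models, x):
    """Return the prediction of the most confident model for one input x of
    shape (1, C, H, W); ties go to the model with fewer parameters."""
    best_conf, best_pred = -1.0, None
    # Iterating lightest-first means a strict ">" keeps the lighter model on ties.
    for model in sorted(models, key=count_params):
        model.eval()
        conf, pred = torch.softmax(model(x), dim=1).max(dim=1)
        if conf.item() > best_conf:
            best_conf, best_pred = conf.item(), pred.item()
    return best_pred, best_conf
```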

Language: English

Citations

0