Explainable AI: Enhancing Interpretability of Machine Learning Models DOI

Duru Kulaklıoğlu

Human computer interaction, Journal Year: 2024, Volume and Issue: 8(1), P. 91 - 91

Published: Dec. 6, 2024

Explainable Artificial Intelligence (XAI) is emerging as a critical field to address the “black box” nature of many machine learning (ML) models. While these models achieve high predictive accuracy, their opacity undermines trust, adoption, and ethical compliance in domains such as healthcare, finance, and autonomous systems. This research explores methodologies and frameworks that enhance the interpretability of ML models, focusing on techniques such as feature attribution, surrogate models, and counterfactual explanations. By balancing model complexity and transparency, the study highlights strategies that bridge the gap between performance and explainability. The integration of XAI into workflows not only fosters trust but also aligns with regulatory requirements, enabling actionable insights for stakeholders. The findings reveal a roadmap for designing inherently interpretable models and post-hoc analysis tools, offering a sustainable approach to democratizing AI.
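
The post-hoc techniques named in this abstract can be illustrated with a short, self-contained sketch. This is not code from the paper: the synthetic dataset, the gradient-boosted "black box", the shallow-tree surrogate, and permutation importance as the feature-attribution method are all assumptions chosen for demonstration.

```python
# Illustrative sketch: post-hoc interpretability via a global surrogate model
# plus a simple feature-attribution view (permutation importance).
# Dataset, models, and hyperparameters are placeholders, not the paper's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Opaque model: high accuracy, low transparency.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Global surrogate: a shallow tree trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to black box: {fidelity:.2f}")

# Feature attribution: permutation importance of the black box on held-out data.
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

The fidelity score indicates how faithfully the transparent surrogate mimics the opaque model; if fidelity is low, the surrogate's explanations should not be trusted.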

Language: English

Integrating BERT Embeddings with SVM for Prostate Cancer Prediction DOI

A. S. M. M. R. Khan, Fariba Tasnia Khan, Tanjim Mahmud et al.

Published: May 2, 2024

Prostate cancer diagnosis is a critical area in oncology where accurate and timely identification of malignancy is imperative for effective treatment. In this paper, we propose an approach that integrates BERT (Bidirectional Encoder Representations from Transformers) embeddings with an SVM for the task of prostate cancer diagnosis. Leveraging BERT's ability to capture complex contextual relationships within textual medical data, we extract clinical features and utilize an RBF kernel to construct a robust classification model; the SVM, with its ability to find clear decision boundaries, can provide robust classification. The methodology is validated on a dataset containing diverse clinical parameters associated with prostate cancer cases. Our experimental results demonstrate the efficacy of the proposed model, showcasing improved diagnostic accuracy compared to traditional approaches. The hybrid model, integrating both textual and numerical features, demonstrated a commendable accuracy of 95%, outperforming the 86% achieved by the model that relies solely on numerical data.
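
A minimal sketch of the kind of pipeline this abstract describes, assuming the Hugging Face transformers library for BERT embeddings and scikit-learn for the RBF-kernel SVM. The bert-base-uncased checkpoint, the use of the [CLS] token as a text embedding, and the toy clinical notes are illustrative assumptions, not the authors' setup.

```python
# Illustrative BERT-embedding + RBF-kernel SVM pipeline (not the paper's code).
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.svm import SVC

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()

def embed(texts):
    """Return one [CLS] embedding per clinical text snippet."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**batch)
    return out.last_hidden_state[:, 0, :].numpy()  # shape: (n_texts, 768)

# Hypothetical free-text notes and labels (1 = malignant, 0 = benign).
notes = ["elevated PSA, irregular nodule on DRE", "PSA within normal range, no nodules"]
labels = [1, 0]

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(embed(notes), labels)
print(clf.predict(embed(["rising PSA trend, firm nodule noted"])))
```

In the hybrid setting reported above, such text embeddings would additionally be concatenated with numerical clinical features before fitting the SVM.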

Language: English

Citations: 1

Hybrid Deep Transfer Learning Framework for Humerus Fracture Detection and Classification from X-ray Images DOI
Puja Dey, Tanjim Mahmud, Khan Md. Foysol et al.

Published: June 21, 2024

Language: English

Citations: 1

Anatomic Interpretability in Neuroimage Deep Learning: Saliency Approaches for Typical Aging and Traumatic Brain Injury DOI Creative Commons

Kevin Guo, Nikhil N. Chaudhari, Tamara Jafar et al.

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: Oct. 16, 2024

Abstract: The black box nature of deep neural networks (DNNs) makes researchers and clinicians hesitant to rely on their findings. Saliency maps can enhance DNN explainability by suggesting the anatomic localization of relevant brain features. This study compares seven popular attribution-based saliency approaches that assign neuroanatomic interpretability to DNNs estimating biological age (BA) from magnetic resonance imaging (MRI). Cognitively normal (CN) adults (N = 13,394, 5,900 males; mean age: 65.82 ± 8.89 years) are included for training, testing, validation, and saliency map generation for BA estimation. To assess robustness in the presence of deviations from normality, maps are also generated for adults with mild traumatic brain injury (mTBI, N = 214, 135 males; mean age: 55.3 ± 9.9 years). We assess the methods’ capacities to capture known anatomic features of aging and compare them against a surrogate ground truth whose localization is known a priori. Anatomic features are identified most reliably by the integrated gradients method, which outperforms all others through its ability to localize these features. Gradient Shapley additive explanations, input × gradient, and masked gradient perform less consistently but still highlight ubiquitous features of aging (ventricle dilation, hippocampal atrophy, sulcal widening). The methods involving saliency, guided backpropagation, and gradient-weighted class activation mapping localize features outside the brain and are therefore undesirable. Our research suggests the relative tradeoffs of saliency methods for interpreting DNN findings during BA estimation in typical aging and after mTBI.
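
Integrated gradients, which the study identifies as the most reliable of the seven approaches, can be sketched in a few lines of PyTorch. The tiny 3D CNN and the random volume below are placeholders standing in for the study's brain-age DNN and MRI data; this is an illustrative approximation, not the authors' implementation.

```python
# Minimal sketch of integrated gradients for a brain-age regressor.
# The model and input are placeholders, not the study's network or data.
import torch
import torch.nn as nn

model = nn.Sequential(  # stand-in brain-age regressor
    nn.Conv3d(1, 4, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(4, 1),
)
model.eval()

def integrated_gradients(model, x, baseline=None, steps=50):
    """Approximate IG: average gradients along a straight path from baseline to x."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    total_grads = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        output = model(point).sum()
        grad = torch.autograd.grad(output, point)[0]
        total_grads += grad
    return (x - baseline) * total_grads / steps  # attribution per voxel

mri = torch.rand(1, 1, 16, 16, 16)  # placeholder MRI volume
attributions = integrated_gradients(model, mri)
print(attributions.shape)           # same shape as the input volume
```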

Language: English

Citations: 1

Anatomic Interpretability in Neuroimage Deep Learning: Saliency Approaches for Typical Aging and Traumatic Brain Injury DOI Creative Commons

Kevin Guo, Nikhil N. Chaudhari, Tamara Jafar et al.

Neuroinformatics, Journal Year: 2024, Volume and Issue: 22(4), P. 591 - 606

Published: Nov. 6, 2024

Abstract: The black box nature of deep neural networks (DNNs) makes researchers and clinicians hesitant to rely on their findings. Saliency maps can enhance DNN explainability by suggesting the anatomic localization of relevant brain features. This study compares seven popular attribution-based saliency approaches that assign neuroanatomic interpretability to DNNs estimating biological age (BA) from magnetic resonance imaging (MRI). Cognitively normal (CN) adults (N = 13,394, 5,900 males; mean age: 65.82 ± 8.89 years) are included for training, testing, validation, and saliency map generation for BA estimation. To assess robustness in the presence of deviations from normality, maps are also generated for adults with mild traumatic brain injury (mTBI, N = 214, 135 males; mean age: 55.3 ± 9.9 years). We assess the methods’ capacities to capture known anatomic features of aging and compare them against a surrogate ground truth whose localization is known a priori. Anatomic features are identified most reliably by the integrated gradients method, which outperforms all others through its ability to localize these features. Gradient Shapley additive explanations, input × gradient, and masked gradient perform less consistently but still highlight ubiquitous features of aging (ventricle dilation, hippocampal atrophy, sulcal widening). The methods involving saliency, guided backpropagation, and gradient-weighted class activation mapping localize features outside the brain and are therefore undesirable. Our research suggests the relative tradeoffs of saliency methods for interpreting DNN findings during BA estimation in typical aging and after mTBI.

Language: English

Citations: 1

Explainable AI: Enhancing Interpretability of Machine Learning Models DOI

Duru Kulaklıoğlu

Human computer interaction, Journal Year: 2024, Volume and Issue: 8(1), P. 91 - 91

Published: Dec. 6, 2024

Explainable Artificial Intelligence (XAI) is emerging as a critical field to address the “black box” nature of many machine learning (ML) models. While these models achieve high predictive accuracy, their opacity undermines trust, adoption, and ethical compliance in domains such as healthcare, finance, and autonomous systems. This research explores methodologies and frameworks that enhance the interpretability of ML models, focusing on techniques such as feature attribution, surrogate models, and counterfactual explanations. By balancing model complexity and transparency, the study highlights strategies that bridge the gap between performance and explainability. The integration of XAI into workflows not only fosters trust but also aligns with regulatory requirements, enabling actionable insights for stakeholders. The findings reveal a roadmap for designing inherently interpretable models and post-hoc analysis tools, offering a sustainable approach to democratizing AI.

Language: English

Citations: 1