Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It
Yasir Hafeez, Khuhed Memon, Maged S. Al-Quraishi, et al.

Diagnostics, Journal Year: 2025, Issue 15(2), pp. 168–168

Published: Jan. 13, 2025

Background: Artificial intelligence (AI) has recently made unprecedented contributions in every walk of life, but it has not been able to work its way into diagnostic medicine and standard clinical practice yet. Although data scientists, researchers, and medical experts have been working in the direction of designing and developing computer aided diagnosis (CAD) tools to serve as assistants to doctors, their large-scale adoption and integration into the healthcare system still seems far-fetched. Diagnostic radiology is no exception. Imaging techniques like magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) scans have been widely and very effectively employed by radiologists and neurologists for the differential diagnoses of neurological disorders for decades, yet AI-powered systems that analyze such scans have not been incorporated into standard operating procedures of healthcare systems. Why? It is absolutely understandable that in medicine, precious human lives are on the line, and hence there is no room for even the tiniest mistakes. Nevertheless, with the advent of explainable artificial intelligence (XAI), the old-school black boxes of deep learning (DL) can be unraveled. Would XAI be the turning point to finally embrace AI in diagnostic radiology? This review is a humble endeavor to find answers to these questions. Methods: In this review, we present the journey of systems developed to recognize, preprocess, and analyze brain MRI scans for various neurological disorders, with special emphasis on CAD embedded with explainability. A comprehensive review of the literature from 2017 to 2024 was conducted using a host of databases. We also present domain experts' opinions and summarize the challenges up ahead that need to be addressed in order to fully exploit the tremendous potential of XAI in its application to medical diagnostics and to serve humanity. Results: Forty-seven studies were summarized and tabulated with information about the technology and datasets employed, along with performance accuracies. Their strengths and weaknesses are discussed. In addition, the opinions of seven experts from around the world are presented to guide engineers and data scientists in developing such tools. Conclusions: Current research was observed to be focused on the enhancement of the accuracies of DL regimens, with less attention being paid to the authenticity and usefulness of explanations. A shortage of ground truth data for explainability was also observed. Visual explanation methods were found to dominate; however, they might not be enough, and more thorough, professor-like explanations would be required to build the trust of healthcare professionals. Special attention to factors like legal, ethical, safety, and security issues can bridge the current gap between XAI and routine clinical practice.

Language: English


Cited: 2