None DOI Open Access
Loai Abudaqa,

Nabil Al-Swari,

Shady Hegazi

et al.

Journal of Medicinal and Chemical Sciences, Journal Year: 2023, Volume and Issue: 6(9)

Published: May 8, 2023

A blockage of the blood vessels feeding an area causes ischemia, which is defined as a localized absence of blood flow. If an organ such as the heart or brain is not getting enough oxygen and blood flow, it is said to be ischemic. This review describes progress made in the detection, characterization, and prediction of cardiac ischemia using Machine Learning (ML)-based Artificial Intelligence (AI) processes, including Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). In the relatively recent past, the use of machine learning algorithms in cardiology has increasingly centered on image processing for the goals of diagnosis, prognosis, and type identification analysis. The main objective of this study was to improve Nuclear Cardiology (NC) images of patients using Image Processing techniques. Clinical research is being significantly changed by AI application. Through the examination of very big datasets and the convergence of potent ML with rising computer capacity, it has been shown that experimental categorization may be improved by examining extremely high-dimensional, non-linear features, improving the detection of myocardial perfusion abnormalities and the prediction of adverse cardiovascular events at the patient level.

Language: English

A systematic review of Explainable Artificial Intelligence models and applications: Recent developments and future trends DOI Creative Commons

A. Saranya,

R. Subhashini

Decision Analytics Journal, Journal Year: 2023, Volume and Issue: 7, P. 100230 - 100230

Published: April 17, 2023

Artificial Intelligence (AI) uses systems and machines to simulate human intelligence and solve common real-world problems. Machine learning and deep learning are technologies that use algorithms to predict outcomes more accurately without relying on human intervention. However, the opaque, black-box nature of these models and their cumulative complexity make it difficult to understand how results are achieved. Explainable AI (XAI) is a term that refers to systems that provide explanations for their decisions or predictions to users. XAI aims to increase the transparency, trustworthiness, and accountability of AI systems, especially when they are used in high-stakes applications such as healthcare, finance, and security. This paper offers a systematic literature review of XAI approaches and observes 91 recently published articles describing the development of XAI applications in healthcare, manufacturing, transportation, and finance. We investigated the Scopus, Web of Science, IEEE Xplore, and PubMed databases to find pertinent publications between January 2018 and October 2022. It contains research on XAI modelling retrieved from scholarly databases using keyword searches. We think our review extends the literature by providing a roadmap for further work in the field.

Language: English

Citations

206

Explainable AI approaches in deep learning: Advancements, applications and challenges DOI
Md. Tanzib Hosain,

Jamin Rahman Jim,

M. F. Mridha

et al.

Computers & Electrical Engineering, Journal Year: 2024, Volume and Issue: 117, P. 109246 - 109246

Published: April 26, 2024

Language: English

Citations

27

Unlocking the black box: an in-depth review on interpretability, explainability, and reliability in deep learning DOI
Emrullah Şahin, Naciye Nur Arslan, Durmuş Özdemir

et al.

Neural Computing and Applications, Journal Year: 2024, Volume and Issue: unknown

Published: Nov. 18, 2024

Language: English

Citations

11

Artificial intelligence-powered coronary artery disease diagnosis from SPECT myocardial perfusion imaging: a comprehensive deep learning study DOI Creative Commons
Ghasem Hajianfar,

Omid Gharibi,

Maziar Sabouri

et al.

European Journal of Nuclear Medicine and Molecular Imaging, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 20, 2025

Abstract

Background: Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is a well-established modality for noninvasive diagnostic assessment of coronary artery disease (CAD). However, the time-consuming and experience-dependent visual interpretation of SPECT images remains a limitation in the clinic.

Purpose: We aimed to develop advanced models to diagnose CAD using different supervised and semi-supervised deep learning (DL) algorithms and training strategies, including transfer learning and data augmentation, with SPECT-MPI and invasive coronary angiography (ICA) as the standard reference.

Materials and methods: A total of 940 patients who underwent SPECT-MPI were enrolled (281 included ICA). Quantitative perfusion SPECT (QPS) was used to extract polar maps of rest and stress states. We defined two tasks: (1) automated diagnosis of CAD with an expert reader (ER) as reference, and (2) CAD diagnosis based on ICA reports as reference. In task 2, we defined 6 training strategies for the DL models. We implemented 13 DL models along with 4 input types, with and without data augmentation (WAug and WoAug), to train, validate, and test the models (728 models). One hundred patients (the same as in task 1) were used to evaluate the models per vessel and per patient. Metrics such as the area under the receiver operating characteristics curve (AUC), accuracy, sensitivity, specificity, precision, and balanced accuracy were reported. DeLong and pairwise Wilcoxon rank sum tests were respectively used to compare models and strategies after 1000 bootstraps on all data. We also compared the performance of our best model with the ER's diagnosis.

Results: In task 1, DenseNet201 Late Fusion (AUC = 0.89) and ResNet152V2 (AUC = 0.83) outperformed the other models in per-vessel and per-patient analyses, respectively. For ICA-based prediction, in Strategy 3 (a combination of ER- and ICA-based train data), WoAug InceptionResNetV2 EarlyFusion (AUC = 0.71), and in Strategy 5 (a semi-supervised approach) (AUC = 0.77), outperformed the other models. Moreover, saliency maps showed that the models could be helpful for focusing on relevant spots for decision making.

Conclusion: Our study confirmed the potential of DL-based analysis for automation of ER-based diagnosis, with the models' promising performance showing close to expert-level analysis. It demonstrated that combining data sources, such as those based on ICA, and methods like semi-supervised learning can increase performance. The proposed models, coupled with computer-aided diagnosis systems, could be an assistant to nuclear medicine physicians to improve their reporting, but only in the LAD territory.

Clinical trial number: Not applicable.

Language: English

Citations

1

A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging DOI Creative Commons
Mélanie Champendal, Henning Müller, John O. Prior

et al.

European Journal of Radiology, Journal Year: 2023, Volume and Issue: 169, P. 111159 - 111159

Published: Oct. 21, 2023

Purpose: To review the eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI).

Method: A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on Pubmed, Embase, Cinhal, Web of Science, BioRxiv, MedRxiv, and Google Scholar. Studies published in French and English after 2017 were included. Keyword combinations and descriptors related to explainability and MI modalities were employed. Two independent reviewers screened abstracts, titles, and full text, resolving differences through discussion.

Results: 228 studies met the criteria. XAI publications are increasing, targeting MRI (n=73), radiography (n=47), and CT (n=46). Lung (n=82) and brain (n=74) pathologies, Covid-19 (n=48), Alzheimer's disease (n=25), and tumors (n=15) are the main pathologies explained. Explanations are presented visually (n=186), numerically (n=67), rule-based (n=11), and textually and example-based (n=6). Commonly explained tasks include classification (n=89), prediction, diagnosis (n=39), detection (n=29), segmentation (n=13), and image quality improvement. The most frequently provided explanations were local (78.1%), 5.7% were global, and 16.2% combined both local and global approaches. Post-hoc approaches were predominantly used. Terminology varied, sometimes indistinctively using explainable (n=207), interpretable (n=187), understandable (n=112), transparent (n=61), reliable (n=31), and intelligible (n=3).

Conclusion: The number of XAI publications in medical imaging is increasing, primarily focusing on applying XAI techniques to MRI, CT, and radiography for classifying and predicting lung and brain pathologies. Visual and numerical output formats are predominantly used. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used indistinctively. Future development should consider user needs and perspectives.

Language: English

Citations

22

An explainable transfer learning framework for multi-classification of lung diseases in chest X-rays DOI Creative Commons

Aryan Nikul Patel,

Ramalingam Murugan, Gautam Srivastava

et al.

Alexandria Engineering Journal, Journal Year: 2024, Volume and Issue: 98, P. 328 - 343

Published: May 7, 2024

In the field of medical imaging, the increasing demand for advanced computer-aided diagnosis systems is crucial in radiography. Accurate identification of various diseases, such as COVID-19, pneumonia, tuberculosis, and pulmonary lung nodules, holds vital significance. Despite substantial progress in the field, a persistent research gap necessitates the development of models that excel in precision and provide transparency in their decision-making processes. In order to address this issue, this work introduces an approach that utilizes transfer learning through the EfficientNet-B4 architecture, leveraging a pre-trained model to enhance classification performance on a comprehensive dataset of chest X-rays. The integration of explainable artificial intelligence (XAI), specifically emphasizing Grad-CAM, contributes to interpretability by providing insights into the neural network's decision process, elucidating the salient features and activation regions influencing multi-disease classifications. The result is a robust system achieving an impressive 96% accuracy, accompanied by visualizations highlighting critical regions in X-ray images. This investigation not only advances the progression of the field but also sets a pioneering benchmark for dependable and transparent diagnostic disease identification.
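For readers unfamiliar with the Grad-CAM mechanism the abstract leans on: it weights each feature map of a convolutional layer by the spatially averaged gradient of the class score with respect to that map, sums the weighted maps, and applies a ReLU. A minimal numpy sketch of that computation (illustrative only, not the paper's implementation; the toy tensors stand in for a real network's activations and gradients):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer (illustrative sketch).

    activations: (C, H, W) feature maps of the chosen conv layer
    gradients:   (C, H, W) gradient of the class score w.r.t. those maps
    returns:     (H, W) heatmap normalized to [0, 1]
    """
    # Channel importance weights: global-average-pool the gradients
    weights = gradients.mean(axis=(1, 2))                              # (C,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    # Normalize for visualization (upsampling to image size omitted here)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: random tensors in place of a trained network's outputs
rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))
grads = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

In practice the (H, W) heatmap is upsampled to the input resolution and overlaid on the X-ray to highlight the regions driving the prediction.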

Language: English

Citations

8

Explainable Artificial Intelligence (XAI) in healthcare: Interpretable Models for Clinical Decision Support DOI

Nitin Rane,

Saurabh Choudhary,

Jayesh Rane

et al.

SSRN Electronic Journal, Journal Year: 2023, Volume and Issue: unknown

Published: Jan. 1, 2023

In healthcare, the incorporation of Artificial Intelligence (AI) plays a pivotal role in enhancing diagnostic precision and guiding treatment decisions. Nevertheless, the lack of transparency in conventional AI models poses challenges in gaining the trust of clinicians and comprehending the rationale behind their outputs. This research paper explores Explainable AI (XAI) and its application in healthcare, with a specific focus on transparent models designed for clinical decision support across various medical disciplines. The paper initiates by underscoring the crucial requirement for interpretability in AI systems within the healthcare realm. Recognizing the diverse nature of medical specialties, the study investigates tailored XAI approaches to meet the distinctive needs of areas such as radiology, pathology, cardiology, and oncology. Through a thorough review of existing literature and analysis, it identifies key obstacles and prospects for implementing XAI across varied clinical contexts. In the field of radiology, a cornerstone of medical imaging, XAI proves beneficial in elucidating the decision-making procedures of image analysis algorithms. The paper probes into the impact of interpretable models on radiological diagnoses, examining how XAI can seamlessly integrate AI-generated insights into clinical workflows. Within pathology, where accuracy is of utmost importance, XAI clarifies and enhances histopathological assessments. By demystifying the intricacies of AI-driven pathology models, the paper aims to empower pathologists to leverage these tools for more accurate diagnoses. Cardiology, characterized by a complex interplay of physiological parameters, benefits from XAI offering intelligible explanations for cardiovascular risk predictions and treatment recommendations. The paper delves into this area, highlighting the potential of interpretable systems. Moreover, in oncology, where decisions hinge on the precise identification and characterization of tumors, XAI aids in unraveling intricate machine learning models. This, in turn, fosters trust among oncologists utilizing personalized treatment strategies.

Language: English

Citations

15

Attention guided grad-CAM : an improved explainable artificial intelligence model for infrared breast cancer detection DOI
Kaushik Raghavan,

B. Sivaselvan,

V. Kamakoti

et al.

Multimedia Tools and Applications, Journal Year: 2023, Volume and Issue: 83(19), P. 57551 - 57578

Published: Dec. 15, 2023

Language: English

Citations

15

Cervical Spine Fracture Detection and Classification Using Two-Stage Deep Learning Methodology DOI Creative Commons
Muhammad Yaseen, Maisam Ali, Sikandar Ali

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 72131 - 72142

Published: Jan. 1, 2024

Cervical spine fractures are a medical emergency that can cause permanent paralysis and even death. Traditional fracture detection techniques, such as manual radiography image interpretation, are time-consuming and prone to human error. Deep learning algorithms have shown promising results in various medical imaging applications, i.e., disease diagnosis, including fractures of bones. In this study, we propose a two-stage approach for detecting cervical spine fractures. The first stage employs a convolutional neural network (CNN) model to determine the presence or absence of a fracture in the cervical spine, using a dataset of Computed Tomography (CT) scan images as well as Grad-CAM for enhanced visualization and interpretation. In the second stage, our focus shifts to localizing fractures in specific vertebrae within the cervical spine. To accomplish this task, we trained and evaluated the performance of YOLOv5 and YOLOv8 models on 9170 images consisting of seven vertebrae. Both YOLO versions were compared and evaluated. The precision, recall, mAP50, and mAP50-90 were 0.900, 0.890, 0.935, and 0.872, respectively. The results of this research demonstrate the potential of deep learning-based approaches for cervical spine fracture detection. By automating the detection process, these models can assist radiologists and healthcare professionals in making accurate and timely diagnoses, leading to improved patient outcomes.
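The detection metrics quoted above (precision, recall, mAP50) rest on the intersection-over-union (IoU) between predicted and ground-truth boxes: at mAP50, a prediction counts as a true positive when IoU is at least 0.5. A minimal sketch of that overlap test, with hypothetical box coordinates rather than the study's data:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of areas minus the shared part
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Hypothetical prediction vs. ground truth: IoU ≈ 0.826, so a hit at the 0.5 threshold
pred, truth = (10, 10, 50, 50), (12, 8, 48, 52)
print(iou(pred, truth) >= 0.5)  # True
```

mAP50 then averages precision over recall levels using this matching rule; mAP50-90 repeats the computation at IoU thresholds from 0.5 to 0.9 and averages the results.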

Language: English

Citations

5

Explainable artificial intelligence for medical imaging: Review and experiments with infrared breast images DOI
Kaushik Raghavan, S. Balasubramanian,

V. Kamakoti

et al.

Computational Intelligence, Journal Year: 2024, Volume and Issue: 40(3)

Published: June 1, 2024

Abstract There is a growing trend of using artificial intelligence, particularly deep learning algorithms, in medical diagnostics, revolutionizing healthcare by improving efficiency, accuracy, and patient outcomes. However, the use of artificial intelligence in medical diagnostics comes with a critical need to explain the reasoning behind artificial intelligence-based predictions and ensure transparency in decision-making. Explainable artificial intelligence has emerged as a crucial research area to address the need for interpretability in medical diagnostics. Explainability techniques aim to provide insights into the decision-making process of artificial intelligence systems, enabling clinicians to understand the factors the algorithms consider in reaching their predictions. This paper presents a detailed review of saliency-based (visual) explainability methods, such as class activation maps, which have gained popularity in medical imaging because they provide visual explanations highlighting the regions of an image most influential to the artificial intelligence's decision. We also present the literature on non-visual methods, but our focus will be on saliency-based methods. We review the existing methods and experiment with infrared breast images for detecting breast cancer. Towards the end of this paper, we propose an "attention guided Grad-CAM" that enhances the visualizations of explainable artificial intelligence. The review shows that saliency methods are not yet explored in the context of infrared imaging and opens up a wide range of opportunities for further research to make clinical thermography an assistive technology for the healthcare community.

Language: English

Citations

4