Future of Electronic Healthcare Management: Blockchain and Artificial Intelligence Integration DOI
Parag Verma, Chakka Mohana Rao, Prudhvi Kumar Chapalamadugu et al.

Blockchain technologies, Journal Year: 2024, Volume and Issue: unknown, P. 179 - 218

Published: Jan. 1, 2024

Language: English

Survey of Explainable AI Techniques in Healthcare DOI Creative Commons
Ahmad Chaddad, Jihao Peng, Jian Xu et al.

Sensors, Journal Year: 2023, Volume and Issue: 23(2), P. 634 - 634

Published: Jan. 5, 2023

Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient's symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind the black-box model of deep learning that reveals how decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types and highlight the algorithms used to increase interpretability in medical imaging topics. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides future directions to guide developers and researchers for prospective investigations on clinical topics, particularly on medical imaging applications.

Language: English

Citations

280

Adaptive Aquila Optimizer with Explainable Artificial Intelligence-Enabled Cancer Diagnosis on Medical Imaging DOI Open Access
Salem Alkhalaf, Fahad Alturise, Adel A. Bahaddad et al.

Cancers, Journal Year: 2023, Volume and Issue: 15(5), P. 1492 - 1492

Published: Feb. 27, 2023

Explainable Artificial Intelligence (XAI) is a branch of AI that mainly focuses on developing systems that provide understandable and clear explanations for their decisions. In the context of cancer diagnosis on medical imaging, an XAI technology uses advanced image analysis methods like deep learning (DL) to make a diagnosis and analyze medical images, as well as provide an explanation of how it arrived at its diagnoses. This includes highlighting the specific areas of the image that the system recognized as indicative of cancer, while also providing data on the fundamental algorithm and decision-making process used. The objective is to provide patients and doctors with a better understanding of the system's decision-making process and to increase transparency and trust in the diagnostic method. Therefore, this study develops an Adaptive Aquila Optimizer with Explainable Artificial Intelligence-Enabled Cancer Diagnosis (AAOXAI-CD) technique on Medical Imaging. The proposed AAOXAI-CD technique intends to accomplish an effectual colorectal and osteosarcoma cancer classification process. To achieve this, it initially employs the Faster SqueezeNet model for feature vector generation. As well, the hyperparameter tuning of the Faster SqueezeNet model takes place with the use of the AAO algorithm. For cancer classification, a majority weighted voting ensemble of three DL classifiers is used, namely recurrent neural network (RNN), gated recurrent unit (GRU), and bidirectional long short-term memory (BiLSTM). Furthermore, the AAOXAI-CD technique combines this approach with the LIME explainability method for a better understanding of the black-box method for accurate cancer detection. The simulation evaluation of the methodology was tested on medical cancer imaging databases, and the outcomes ensured a more auspicious outcome than other current approaches.
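
The ensemble step is the most transferable piece of this pipeline: three sequence classifiers each score an image, and their votes are fused with per-model weights. Below is a minimal sketch of weighted (soft) voting; the weights, class count, and per-model probabilities are illustrative assumptions, since the paper's exact weighting scheme is not reproduced here.

    import numpy as np

    def weighted_vote(prob_list, weights):
        # prob_list: per-model class probabilities, each (n_samples, n_classes)
        # weights:   one reliability weight per model (placeholder values below)
        w = np.asarray(weights, dtype=float)
        w /= w.sum()                              # normalize weights
        stacked = np.stack(prob_list)             # (n_models, n_samples, n_classes)
        fused = np.tensordot(w, stacked, axes=1)  # weighted average of probabilities
        return fused.argmax(axis=-1)              # final class per sample

    # Hypothetical outputs of the RNN, GRU, and BiLSTM heads: 4 samples, 2 classes.
    rnn = np.array([[0.9, 0.1], [0.4, 0.6], [0.3, 0.7], [0.8, 0.2]])
    gru = np.array([[0.8, 0.2], [0.5, 0.5], [0.2, 0.8], [0.7, 0.3]])
    bil = np.array([[0.7, 0.3], [0.3, 0.7], [0.4, 0.6], [0.9, 0.1]])
    print(weighted_vote([rnn, gru, bil], weights=[0.35, 0.30, 0.35]))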

Language: English

Citations

23

From pixels to prognosis: unveiling radiomics models with SHAP and LIME for enhanced interpretability DOI Creative Commons
S. Raptis, Christos Ilioudis, Kiriaki Theodorou et al.

Biomedical Physics & Engineering Express, Journal Year: 2024, Volume and Issue: 10(3), P. 035016 - 035016

Published: March 18, 2024

Abstract Radiomics-based prediction models have shown promise in predicting Radiation Pneumonitis (RP), a common adverse outcome of chest irradiation. This study looks into more than just RP: it also investigates a bigger shift in the way radiomics-based models work. By integrating multi-modal radiomic data, which includes a wide range of variables collected from medical images including cutting-edge PET/CT imaging, we developed predictive models that capture the intricate nature of illness progression. Radiomic features were extracted using PyRadiomics, encompassing intensity, texture, and shape measures. The resulting high-dimensional dataset formed the basis for our models, primarily Gradient Boosting Machines (GBM): XGBoost, LightGBM, and CatBoost. Performance evaluation metrics, including Multi-Modal AUC-ROC, Sensitivity, Specificity, and F1-Score, underscore the superiority of the Deep Neural Network (DNN) model. The DNN achieved a remarkable AUC-ROC of 0.90, indicating superior discriminatory power. Sensitivity and specificity values of 0.85 and 0.91, respectively, highlight its effectiveness in detecting positive occurrences while accurately identifying negatives. External validation on datasets comprising retrospective patient data from a heterogeneous population validates the robustness and generalizability of the models. A further focus is the application of sophisticated model interpretability methods, namely SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations), to improve the clarity and understanding of predictions. These methods allow clinicians to visualize feature effects and provide localized explanations for every prediction, enhancing the comprehensibility of the models. This strengthens trust and collaboration between computational technologies and clinical competence. The integration of data-driven analytics and domain expertise represents a significant advance for the profession, moving us from analyzing pixel-level information toward gaining valuable prognostic insights.
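
As a concrete illustration of the interpretability step described above, the sketch below applies SHAP's TreeExplainer to a gradient-boosted classifier trained on a radiomic feature matrix. The data are random placeholders standing in for PyRadiomics output, and the matrix sizes and model settings are assumptions, not the study's actual pipeline.

    import numpy as np
    import shap
    import xgboost

    rng = np.random.default_rng(0)
    X = rng.random((200, 50))          # placeholder radiomic features (patients x features)
    y = rng.integers(0, 2, 200)        # placeholder RP labels (0 = no RP, 1 = RP)

    model = xgboost.XGBClassifier(n_estimators=200, max_depth=3,
                                  eval_metric="logloss").fit(X, y)

    explainer = shap.TreeExplainer(model)   # fast, exact SHAP for tree ensembles
    shap_values = explainer.shap_values(X)  # per-patient, per-feature attributions

    # Global view: mean |SHAP| per feature ranks radiomic predictors of RP risk.
    top = np.abs(shap_values).mean(axis=0).argsort()[::-1][:5]
    print("Most influential feature indices:", top)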

Language: English

Citations

14

A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging DOI Creative Commons
Mélanie Champendal, Henning Müller, John O. Prior et al.

European Journal of Radiology, Journal Year: 2023, Volume and Issue: 169, P. 111159 - 111159

Published: Oct. 21, 2023

Purpose: To review eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI). Method: A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on Pubmed, Embase, Cinhal, Web of Science, BioRxiv, MedRxiv, and Google Scholar. Studies published in French and English after 2017 were included. Keyword combinations and descriptors related to explainability and MI modalities were employed. Two independent reviewers screened abstracts, titles, and full text, resolving differences through discussion. Results: 228 studies met the criteria. XAI publications are increasing, mainly targeting MRI (n=73), radiography (n=47), and CT (n=46). Lung (n=82) and brain (n=74) pathologies, Covid-19 (n=48), Alzheimer's disease (n=25), and brain tumors (n=15) are the main pathologies explained. Explanations are presented visually (n=186), numerically (n=67), rule-based (n=11), textually (n=11), and example-based (n=6). Commonly explained tasks include classification (n=89), prediction (n=47), diagnosis (n=39), detection (n=29), segmentation (n=13), and image quality improvement (n=6). The most frequently provided explanations were local (78.1%), while 5.7% were global and 16.2% combined both local and global approaches. Post-hoc approaches were predominantly used. Terminology varied, sometimes indistinctively using explainable (n=207), interpretable (n=187), understandable (n=112), transparent (n=61), reliable (n=31), and intelligible (n=3). Conclusion: The number of XAI publications in medical imaging is increasing, primarily focusing on applying XAI techniques to MRI and CT and on classifying and predicting lung and brain pathologies. Visual and numerical output formats are predominantly used. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used indistinctively. Future development should consider user needs and perspectives.

Language: English

Citations

22

A Comparative Study and Systematic Analysis of XAI Models and their Applications in Healthcare DOI
Jyoti Gupta, K. R. Seeja

Archives of Computational Methods in Engineering, Journal Year: 2024, Volume and Issue: unknown

Published: April 16, 2024

Language: English

Citations

6

Explainable machine learning via intra-tumoral radiomics feature mapping for patient stratification in adjuvant chemotherapy for locoregionally advanced nasopharyngeal carcinoma DOI
Xinzhi Teng, Jiang Zhang, Xinyang Han et al.

La radiologia medica, Journal Year: 2023, Volume and Issue: 128(7), P. 828 - 838

Published: June 10, 2023

Language: English

Citations

12

Current state and promise of user-centered design to harness explainable AI in clinical decision-support systems for patients with CNS tumors DOI Creative Commons
Eric Prince, David M. Mirsky, Todd C. Hankinson et al.

Frontiers in Radiology, Journal Year: 2025, Volume and Issue: 4

Published: Jan. 13, 2025

In neuro-oncology, MR imaging is crucial for obtaining detailed brain images to identify neoplasms, plan treatment, guide surgical intervention, and monitor the tumor's response. Recent AI advances in neuroimaging have promising applications, including guiding clinical decisions and improving patient management. However, a lack of clarity on how AI arrives at its predictions has hindered its clinical translation. Explainable AI (XAI) methods aim to improve trustworthiness and informativeness, but their success depends on considering end-users' (clinicians') specific context and preferences. User-Centered Design (UCD) prioritizes user needs in an iterative design process, involving users throughout, providing an opportunity to design XAI systems tailored to neuro-oncology. This review focuses on the intersection of MR imaging interpretation in neuro-oncology management, explainable AI for clinical decision support, and user-centered design. We provide a resource that organizes the necessary concepts, including evaluation, translation, user experience and efficiency enhancement, and improved outcomes, and we discuss the importance of multi-disciplinary skills in creating successful XAI systems. We also discuss how XAI tools, embedded in a human-centered decision-making process and distinct from fully automated solutions, can potentially enhance clinician performance. Following UCD principles to build trust, minimize errors and bias, and create adaptable software holds promise for meeting the expectations of healthcare professionals.

Language: English

Citations

0

Imaging genomics of cancer: a bibliometric analysis and review DOI Creative Commons
Xinyi Gou, Aobo Feng, Caizhen Feng et al.

Cancer Imaging, Journal Year: 2025, Volume and Issue: 25(1)

Published: March 4, 2025

Imaging genomics is a burgeoning field that seeks to identify connections between medical imaging and genomic features. It has been widely applied to explore tumor heterogeneity and predict treatment responsiveness and disease progression in cancer. This review aims to assess the current applications and advancements of imaging genomics. Literature on imaging genomics in cancer was retrieved and selected from PubMed, Web of Science, and Embase before July 2024. Detailed information on the articles, such as the organ systems and features studied, was extracted and analyzed. Citation data were retrieved from Web of Science and Scopus. Additionally, a bibliometric analysis of the included studies was conducted using the Bibliometrix R package and VOSviewer. A total of 370 articles were included in the study. The annual growth rate was 24.88%. China (133) and the USA (107) were the most productive countries. The top 2 keywords plus were "survival" and "classification". The research mainly focuses on the central nervous system (121) and the genitourinary system (110, including 44 breast cancer articles). Despite different cancers utilizing different imaging modalities, more than half of the studies for each employed radiomics. Public databases provide data support for imaging genomics research. The development of artificial intelligence algorithms, especially for feature extraction and model construction, has significantly advanced this field and is conducive to enhancing the related models' interpretability. Nonetheless, challenges such as sample size, standardization, and model construction must be overcome. The trends revealed by this study will guide future research and contribute to more accurate diagnosis and treatment in the clinic.

Language: English

Citations

0

Unveiling Explainable AI in Healthcare: Current Trends, Challenges, and Future Directions DOI Creative Commons
A. Noor, Awais Manzoor, Muhammad Deedahwar Mazhar Qureshi et al.

Wiley Interdisciplinary Reviews Data Mining and Knowledge Discovery, Journal Year: 2025, Volume and Issue: 15(2)

Published: May 11, 2025

ABSTRACT This overview investigates the evolution and current landscape of eXplainable Artificial Intelligence (XAI) in healthcare, highlighting its implications for researchers, technology developers, and policymakers. Following the PRISMA protocol, we analyzed 89 publications from January 2000 to June 2024, spanning 19 medical domains, with a focus on Neurology and Cancer as the most studied areas. Various data types are reviewed, including tabular data, medical imaging, and clinical text, offering a comprehensive perspective on XAI applications. Key findings identify significant gaps, such as the limited availability of public datasets, suboptimal data preprocessing techniques, insufficient feature selection and engineering, and the limited utilization of multiple XAI methods. Additionally, the lack of standardized evaluation metrics and the practical obstacles to integrating XAI systems into clinical workflows are emphasized. We provide actionable recommendations, including the design of explainability-centric models, the application of diverse XAI methods, and the fostering of interdisciplinary collaboration. These strategies aim to guide researchers in building robust AI models, assist developers in creating intuitive and user-friendly tools, and inform policymakers in establishing effective regulations. Addressing these gaps will promote the development of transparent, reliable, and user-centred AI systems in healthcare, ultimately improving clinical decision-making and patient outcomes.

Language: English

Citations

0

Explaining decisions of a light-weight deep neural network for real-time coronary artery disease classification in magnetic resonance imaging DOI Creative Commons
Talha Iqbal, Aaleen Khalid, Ihsan Ullah et al.

Journal of Real-Time Image Processing, Journal Year: 2024, Volume and Issue: 21(2)

Published: Feb. 10, 2024

Abstract In certain healthcare settings, such as emergency or critical care units, where quick and accurate real-time analysis and decision-making are required, the healthcare system can leverage the power of artificial intelligence (AI) models to support decisions and prevent complications. This paper investigates the optimization of AI models based on time complexity, hyper-parameter tuning, and XAI for a classification task. The paper highlights the significance of a lightweight convolutional neural network (CNN) in analysing and classifying Magnetic Resonance Imaging (MRI) in real time, and the model is compared with a CNN-RandomForest (CNN-RF) ensemble. The role of hyper-parameter tuning is also examined in finding the optimal configurations that enhance the model's performance while efficiently utilizing the limited computational resources. Finally, the benefits of incorporating an XAI technique (e.g. GradCAM or Layer-wise Relevance Propagation) in providing transparency and interpretable explanations of model predictions, fostering trust, and enabling error/bias detection are explored. Our inference time on a MacBook laptop for 323 test images of size 100x100 was only 2.6 sec, which is merely 8 milliseconds per image, with accuracy comparable to the ensemble of CNN-RF classifiers. Using the proposed model, clinicians/cardiologists can achieve reliable results while ensuring patients' safety and answering questions imposed by the General Data Protection Regulation (GDPR). This investigative study will advance the understanding and acceptance of AI systems in connected healthcare settings.
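
The headline figure, 323 images in 2.6 s, works out to roughly 8 ms per image, and the measurement itself is straightforward to reproduce. Below is a minimal PyTorch timing sketch under assumed names: the tiny Sequential model stands in for the paper's unspecified lightweight CNN, and the tensors are random stand-ins for 100x100 MRI slices.

    import time
    import torch

    @torch.no_grad()
    def mean_latency_ms(model, images, device="cpu"):
        # Average per-image inference time in milliseconds.
        model.eval().to(device)
        start = time.perf_counter()
        for img in images:                      # one (1, C, H, W) tensor at a time
            _ = model(img.to(device))
        return 1000.0 * (time.perf_counter() - start) / len(images)

    # Placeholder model and data: a toy CNN over 323 random 100x100 "slices".
    model = torch.nn.Sequential(torch.nn.Conv2d(1, 8, 3), torch.nn.Flatten(),
                                torch.nn.LazyLinear(2))
    images = [torch.randn(1, 1, 100, 100) for _ in range(323)]
    print(f"{mean_latency_ms(model, images):.1f} ms per image")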

Language: English

Citations

3