Enhancing trustworthy deep learning for image classification against evasion attacks: a systematic literature review

Dua’a Akhtom,

Manmeet Mahinderjit Singh, XinYing Chew

et al.

Artificial Intelligence Review, Journal Year: 2024, Volume and Issue: 57(7)

Published: June 15, 2024

Abstract: In the rapidly evolving field of Deep Learning (DL), the trustworthiness of models is essential for their effective application in critical domains like healthcare and autonomous systems. Trustworthy DL encompasses aspects such as reliability, fairness, and transparency, which are crucial for its real-world impact and acceptance. However, the development of trustworthy DL faces significant challenges, notably due to adversarial examples, a sophisticated form of evasion attack in adversarial machine learning (AML) that subtly alters inputs to deceive these models and poses a major threat to their safety and reliability. The current body of research primarily focuses on defensive measures, such as enhancing robustness or implementing explainable AI techniques. This approach often neglects to address the fundamental vulnerabilities that adversaries exploit; as a result, research tends to concentrate more on counteracting measures than on gaining an in-depth understanding of attack strategies and the vulnerabilities inherent in models. This gap in comprehensive understanding impedes the formulation of effective defense mechanisms. This study aims to shift the focus from predominantly defensive measures toward an extensive comprehension of adversarial techniques and the vulnerabilities innate to DL models. We undertake this by conducting a thorough systematic literature review encompassing 49 diverse studies from the previous decade. Our findings reveal key characteristics of adversarial examples that enable their success against image classification-based models. Building on these insights, we propose the Transferable Pretrained Adversarial framework (TPre-ADL), a conceptual model that aims to rectify current deficiencies by incorporating the analyzed traits of adversarial examples.
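The evasion attacks surveyed in this abstract rest on one mechanism: perturbing an input along the gradient of the loss so a small, structured change flips the prediction. A minimal sketch of that idea, using the one-step Fast Gradient Sign Method (FGSM) on a toy NumPy logistic "classifier" (all names and values here are illustrative, not taken from the reviewed studies):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """One-step FGSM for a logistic model.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (p - y) * w, where p = sigmoid(w.x + b); FGSM moves x a distance
    eps along the sign of that gradient (uphill on the loss).
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=16)               # toy "trained" weights
b = 0.0
x = 0.5 * w / np.linalg.norm(w)       # an input the model labels as class 1

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.5)
p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)
print(f"clean: {p_clean:.2f}, adversarial: {p_adv:.2f}")
```

The perturbation is bounded per component by eps, yet it reliably flips the decision because it is aligned with the loss gradient rather than being random noise, which is the core vulnerability the review argues defenses must understand rather than merely patch.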

Language: English

AI-assisted facial analysis in healthcare: From disease detection to comprehensive management

Chaoyu Lei,

Kang Dang, Soon H Song

et al.

Patterns, Journal Year: 2025, Volume and Issue: 6(2), P. 101175 - 101175

Published: Feb. 1, 2025

Medical conditions and systemic diseases often manifest as distinct facial characteristics, making the identification of these unique features crucial for disease screening. However, detecting them using facial photography remains challenging because of the wide variability in human facial conditions. The integration of artificial intelligence (AI) into facial analysis represents a promising frontier, offering a user-friendly, non-invasive, and cost-effective screening approach. This review explores the potential of AI-assisted facial analysis for identifying subtle facial phenotypes indicative of health disorders. First, we outline the technological framework essential for effective implementation in healthcare settings. Subsequently, we focus on the role of facial analysis in disease detection. We further expand our examination to include applications in disease monitoring, support for treatment decision-making, and patient follow-up, thereby contributing to comprehensive disease management. Despite its promise, the adoption of this technology faces several challenges, including privacy concerns, model accuracy, issues with interpretability, biases in AI algorithms, and adherence to regulatory standards. Addressing these challenges is essential to ensure fair and ethical use. By overcoming these hurdles, AI-assisted facial analysis can empower healthcare providers, improve patient care outcomes, and enhance global health.

Language: English

Citations: 3

Computer‐Aided Detection (CADe) and Segmentation Methods for Breast Cancer Using Magnetic Resonance Imaging (MRI)

Payam Jannatdoust,

Parya Valizadeh, Nikoo Saeedi

et al.

Journal of Magnetic Resonance Imaging, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 9, 2025

Breast cancer continues to be a major health concern, and early detection is vital for enhancing survival rates. Magnetic resonance imaging (MRI) is a key tool due to its substantial sensitivity for invasive breast cancers. Computer‐aided detection (CADe) systems enhance the effectiveness of MRI by identifying potential lesions, aiding radiologists in focusing on areas of interest, extracting quantitative features, and integrating with computer‐aided diagnosis (CADx) pipelines. This review aims to provide a comprehensive overview of the current state of CADe for breast MRI, covering the technical details of pipelines and segmentation models, including classical intensity‐based methods, supervised and unsupervised machine learning (ML) approaches, and the latest deep learning (DL) architectures. It highlights recent advancements from traditional algorithms to sophisticated DL models such as U‐Nets, emphasizing their implementation with multi‐parametric acquisitions. Despite these advancements, CADe systems face challenges such as variable false‐positive and false‐negative rates, the complexity of interpreting extensive imaging data, variability in system performance, and a lack of large‐scale studies and multicentric models, limiting their generalizability and suitability for clinical implementation. Technical issues, including image artefacts and the need for reproducible and explainable algorithms, remain significant hurdles. Future directions emphasize developing more robust and generalizable models, improving AI transparency to build trust among clinicians, building multi‐purpose systems, and incorporating large language models for diagnostic reporting and patient management. Additionally, efforts to standardize and streamline protocols aim to increase accessibility and reduce costs, optimizing their use in practice. Level of Evidence: NA. Technical Efficacy: Stage 2.

Language: English

Citations: 2

A Multi-Module Explainable Artificial Intelligence Framework for Project Risk Management: Enhancing Transparency in Decision-making
Bodrunnessa Badhon, Ripon K. Chakrabortty, Sreenatha G. Anavatti

et al.

Engineering Applications of Artificial Intelligence, Journal Year: 2025, Volume and Issue: 148, P. 110427 - 110427

Published: March 8, 2025

Language: English

Citations: 2

A survey of explainable artificial intelligence in healthcare: Concepts, applications, and challenges
Ibomoiye Domor Mienye, George Obaido, Nobert Jere

et al.

Informatics in Medicine Unlocked, Journal Year: 2024, Volume and Issue: unknown, P. 101587 - 101587

Published: Oct. 1, 2024

Language: English

Citations: 5

XAI-BO: an architecture using Grad-CAM technique to evaluate Bayesian optimization algorithms on deep learning models
Luyl-Da Quach, Khang Nguyen Quoc, Nguyen Thai-Nghe

et al.

Journal of Information and Telecommunication, Journal Year: 2025, Volume and Issue: unknown, P. 1 - 22

Published: Jan. 7, 2025

Language: English

Citations: 0

Demystifying the Black Box: A Survey on Explainable Artificial Intelligence (XAI) in Bioinformatics

Aishwarya Budhkar,

Qianqian Song, Jing Su

et al.

Computational and Structural Biotechnology Journal, Journal Year: 2025, Volume and Issue: 27, P. 346 - 359

Published: Jan. 1, 2025

The widespread adoption of Artificial Intelligence (AI) and machine learning (ML) tools across various domains has showcased their remarkable capabilities and performance. However, black-box AI models raise concerns about decision transparency and user confidence. Therefore, explainable AI (XAI) and explainability techniques have rapidly emerged in recent years. This paper aims to review existing works on XAI in bioinformatics, with a particular focus on omics and imaging. We seek to analyze the growing demand for XAI, identify current approaches, and highlight their limitations. Our survey emphasizes the specific needs of both bioinformatics applications and their users when developing XAI methods, particularly for omics and imaging data. Our analysis reveals a significant demand for XAI driven by the need for confidence in decision-making processes. At the end of the survey, we provide practical guidelines for system developers.

Language: English

Citations: 0

GoogLeNet/DenseNet-201 to classify near-infrared (NIR) spectrum graphs for cancer diagnosis – using pretrained image networks for medical spectroscopy
Tanmoy Bhattacharjee

Research Square (Research Square), Journal Year: 2025, Volume and Issue: unknown

Published: May 6, 2025

Abstract: The study compares the sensitivity/specificity of classification by pretrained image networks and traditional Machine Learning (ML) methods. One hundred seven spectra each of the benign skin conditions actinic keratosis (ACK) and seborrheic keratosis (SEK), and the skin cancer basal cell carcinoma (BCC), were downloaded from a public database. Eighty spectra per group were used for training and twenty-seven for testing. In the first strategy, spectrum intensity values were used as input to Linear Discriminant Analysis (LDA), K-Nearest Neighbors (KNN), Decision Tree (DT), TreeBagger, an Ensemble method, Naïve Bayes, Support Vector Machine (SVM), and an Artificial Neural Network (ANN). The second strategy involved using spectrum graphs saved as images to train GoogLeNet, Places-365 GoogLeNet, ResNet-50, Inception-V3, DenseNet-201, and NasNetMobile. Strategy 2 yielded better results: 0.7/0.91 (ACK), 0.7/0.83 (BCC), and 0.63/0.85 (SEK), compared to 0.52/0.94 (ACK), 0.7/0.8 (BCC), and 0.5/0.8 (SEK) for strategy 1. Grad-CAM mapping suggested that the 1100–1200, 1350–1450, and 1600–1700 1/cm regions may be responsible for classification in strategy 2. When these regions were plotted as subplots in strategy 2, the sensitivity for BCC increased to 0.78. Results suggest that classifying spectra as images may yield better results, give a visual understanding of the basis of classification, and provide a means to improve it further.
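The second strategy hinges on one preprocessing step: rendering a 1-D spectrum as a 2-D image so a pretrained image network can consume it. A minimal NumPy sketch of that rasterization, as a stand-in for saving a plotted spectrum graph (the image size, peak position, and function names are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

def spectrum_to_image(intensities, height=64):
    """Rasterize a 1-D spectrum into a (height, n_points) binary image.

    Each wavenumber column gets one bright pixel whose row encodes the
    normalized intensity, mimicking a line plot saved as an image.
    """
    x = np.asarray(intensities, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)   # scale to [0, 1]
    rows = ((1.0 - x) * (height - 1)).round().astype(int)  # high value -> top row
    img = np.zeros((height, x.size), dtype=np.uint8)
    img[rows, np.arange(x.size)] = 255
    return img

# A fake spectrum with a single Gaussian peak near "1450 1/cm",
# one of the regions Grad-CAM highlighted in the abstract.
wavenumbers = np.linspace(800, 1800, 256)
spectrum = np.exp(-((wavenumbers - 1450.0) / 40.0) ** 2)
img = spectrum_to_image(spectrum)
print(img.shape)  # a 2-D array ready to resize / stack into 3 channels
```

In practice the study saved actual plot graphics, and a real pipeline would replicate the image into three channels and resize it to each network's expected input (e.g. 224x224 for GoogLeNet or DenseNet-201) before fine-tuning.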

Language: English

Citations: 0

A review of explainable AI techniques and their evaluation in mammography for breast cancer screening

Noora Shifa,

Moutaz Saleh, Younes Akbari

et al.

Clinical Imaging, Journal Year: 2025, Volume and Issue: unknown, P. 110492 - 110492

Published: May 1, 2025

Language: English

Citations: 0

An explainable AI-driven deep neural network for accurate breast cancer detection from histopathological and ultrasound images

Md. Romzan Alom,

Fahmid Al Farid, Muhammad Aminur Rahaman

et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: May 20, 2025

Language: English

Citations: 0

A region-of-interest embedded graph neural architecture for gallbladder cancer detection
Saiful Islam,

Md. Injamul Haque,

Mushrat Jahan

et al.

Results in Engineering, Journal Year: 2025, Volume and Issue: unknown, P. 104624 - 104624

Published: March 1, 2025

Language: English

Citations: 0