Adversarial Training for Mitigating Insider-Driven XAI-Based Backdoor Attacks

R. G. Gayathri, Atul Sajjanhar, Yang Xiang et al.

Future Internet, Journal Year: 2025, Volume and Issue: 17(5), P. 209 - 209

Published: May 6, 2025

The study investigates how adversarial training techniques can be used to counter backdoors introduced into deep learning models by an insider with privileged access to the data. The research demonstrates an insider-driven, poison-label backdoor approach in which triggers are introduced into the dataset. These triggers cause the model to misclassify poisoned inputs while maintaining standard classification performance on clean data. An adversary can improve the stealth and effectiveness of such attacks by utilizing XAI techniques, which makes detection more difficult. The study uses publicly available datasets to evaluate model robustness in this situation. Our experiments show that adversarial training considerably reduces the impact of such attacks. The results are verified using various performance metrics, revealing model vulnerabilities and possible countermeasures. The findings demonstrate the importance of robust and effective defenses for securing deep learning models against insider-driven backdoor attacks.

Language: English
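
A minimal, illustrative sketch of the setup described in the abstract above: a poison-label backdoor, in which an insider stamps a trigger onto a fraction of the training samples and relabels them to a target class, followed by adversarial training as the mitigation. The toy model, the random stand-in data, the fixed corner trigger, the PGD parameters, and the helper names (add_trigger, poison_batch, pgd_perturb) are assumptions for illustration only and are not taken from the paper, which additionally uses XAI techniques to guide trigger placement.

# Sketch only (assumed PyTorch implementation, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def add_trigger(x, patch_value=1.0, size=3):
    # Stamp a small square trigger into the bottom-right corner of each image.
    x = x.clone()
    x[..., -size:, -size:] = patch_value
    return x

def poison_batch(x, y, target_class=0, poison_rate=0.1):
    # Poison-label attack: add the trigger to a fraction of samples and relabel them.
    n_poison = max(1, int(poison_rate * x.size(0)))
    idx = torch.randperm(x.size(0))[:n_poison]
    x[idx] = add_trigger(x[idx])
    y[idx] = target_class
    return x, y

def pgd_perturb(model, x, y, eps=0.1, alpha=0.02, steps=5):
    # Projected gradient descent to craft perturbed inputs for adversarial training.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

# A toy classifier and random data stand in for the (unspecified) public datasets.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.rand(32, 1, 28, 28)        # placeholder clean images
    y = torch.randint(0, 10, (32,))      # placeholder labels
    x, y = poison_batch(x, y)            # insider injects poisoned, relabelled samples
    x_adv = pgd_perturb(model, x, y)     # defence: train on adversarially perturbed inputs
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()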

AInsectID Version 1.1: An Insect Species Identification Software Based on the Transfer Learning of Deep Convolutional Neural Networks
Haleema Sadia, Parvez Alam

Published: March 25, 2025

AInsectID Version 1.1 is a Graphical User Interface (GUI)-operable, open-source insect species identification, color processing, and image analysis software. The software has a current database of 150 insects and integrates artificial intelligence approaches to streamline the identification process, with a focus on addressing the prediction challenges posed by insect mimics. This paper presents the methods of algorithmic development, coupled with the rigorous machine training used to enable high levels of validation accuracy. Our work applies transfer learning to prominent convolutional neural network (CNN) architectures, including VGG16, GoogLeNet, InceptionV3, MobileNetV2, ResNet50, and ResNet101. Here, we employ both fine tuning and hyperparameter optimization to improve performance. After extensive computational experimentation, ResNet101 is evidenced as the most effective CNN model, achieving an accuracy of 99.65%. The dataset utilized was sourced from the National Museum of Scotland, the Natural History Museum London, and open-source datasets from Zenodo (CERN's Data Center), ensuring a diverse and comprehensive collection of species.

Language: English

Citations: 0
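
A minimal, illustrative sketch of the transfer-learning approach described in the abstract above, using a pretrained ResNet101 from torchvision with its classifier head replaced for a 150-species problem. The insect_images/train directory, the ImageNet preprocessing, the batch size, learning rate, and epoch count are assumptions for illustration only and are not taken from the AInsectID implementation.

# Sketch only (assumed PyTorch/torchvision implementation, not the AInsectID codebase).
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_SPECIES = 150  # the abstract mentions a current database of 150 insects

# Standard ImageNet preprocessing for a pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one sub-directory of images per species.
train_set = datasets.ImageFolder("insect_images/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Load ResNet101 pretrained on ImageNet and replace the classification head.
model = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

# Fine tuning: all layers are updated, typically with a small learning rate.
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()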
