Trojan Attacks and Countermeasures on Deep Neural Networks from Life-Cycle Perspective: A Review
Lingxin Jin, Xiangyu Wen, Wei Jiang

et al.

ACM Computing Surveys, Journal Year: 2025, Volume and Issue: unknown

Published: March 31, 2025

Deep Neural Networks (DNNs) have been widely deployed in security-critical artificial intelligence systems, such as autonomous driving and facial recognition systems. However, recent research has revealed their susceptibility to Trojan information maliciously injected by attackers. This vulnerability is caused, on the one hand, by the complex architecture and non-interpretability of DNNs; on the other hand, external open-source datasets, pre-trained models, and intelligent service platforms further exacerbate the threat of Trojan attacks. This article presents the first comprehensive survey of Trojan attacks against DNNs from a life-cycle perspective, covering the training, post-training, and inference (deployment) stages. Specifically, it reformulates the relationships between Trojan attacks and poisoning attacks, adversarial example attacks, and bit-flip attacks. Then, Trojan attacks on newly emerged model architectures (e.g., vision transformers and spiking neural networks) and application fields are investigated. Moreover, the article also reviews countermeasures, including Trojan detection and elimination, and evaluates the practical effectiveness of existing defense strategies at the different life-cycle stages. Finally, we conclude and provide constructive insights to advance research on Trojan attacks and their countermeasures.
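The abstract above relates Trojan injection to poisoning, adversarial-example, and bit-flip attacks. As a hedged illustration of the last of these, below is a minimal NumPy sketch of flipping a single bit in a quantized (int8) weight tensor, the kind of fault that RowHammer-style attacks induce in deployed models. The function name, targeted index, and chosen bit are illustrative assumptions; real attacks search for the most damaging bits.

import numpy as np

def flip_bit(weights_int8, index, bit=7):
    """Flip one bit of a quantized int8 weight (illustrative sketch of
    a bit-flip fault; real attacks optimize the index/bit choice)."""
    w = weights_int8.copy()
    bits = w.view(np.uint8)        # reinterpret the raw bytes, not the values
    bits[index] ^= np.uint8(1 << bit)
    return w

# Toy usage: flipping the sign bit (bit 7) turns 25 into -103.
w = np.array([25, -3, 110], dtype=np.int8)
print(flip_bit(w, index=0))        # -> [-103   -3  110]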

Language: English

Citations: 0

A Comprehensive Survey on Backdoor Attacks and Their Defenses in Face Recognition Systems
Quentin Le Roux, Eric Bourbao, Yannick Teglia

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 47433 - 47468

Published: Jan. 1, 2024

Deep learning has significantly transformed face recognition, enabling the deployment of large-scale, state-of-the-art solutions worldwide. However, the widespread adoption of deep neural networks (DNNs) and the rise of Machine Learning as a Service emphasize the need for secure DNNs. This paper revisits the face recognition threat model in the context of DNN ubiquity and the common practice of outsourcing DNN training and hosting to third parties. Here, we identify backdoor attacks as a significant threat to modern DNN-based face recognition systems (FRS). Backdoor attacks involve an attacker manipulating a DNN's training or deployment, injecting it with stealthy malicious behavior. Once the DNN has entered its inference stage, the attacker may activate the backdoor and compromise the DNN's intended functionality. Given the critical nature of this threat to DNN-based FRS, our paper comprehensively surveys the literature on backdoor attacks and defenses previously demonstrated on FRS DNNs. As a last point, we highlight potential vulnerabilities and unexplored areas in FRS security.
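To make the threat concrete, here is a minimal NumPy sketch of the training-time manipulation described above: a small fraction of images is stamped with a fixed corner trigger and relabeled to an attacker-chosen target class (BadNets-style poisoning). The function, its parameters, and the defaults are illustrative assumptions, not code from any surveyed FRS.

import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05,
                   trigger_size=3, trigger_value=1.0, seed=0):
    """Stamp a square trigger on a random fraction of the training
    images and relabel them with the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger patch in the bottom-right corner of each image.
    images[idx, -trigger_size:, -trigger_size:, :] = trigger_value
    labels[idx] = target_label
    return images, labels

# Toy usage: 100 fake 32x32 RGB images in [0, 1] with 10 classes.
x = np.random.rand(100, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=100)
x_poisoned, y_poisoned = poison_dataset(x, y, target_label=0)

A model trained on (x_poisoned, y_poisoned) behaves normally on clean inputs but predicts the target class whenever the trigger patch is present, which is exactly the inference-stage activation the survey describes.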

Language: English

Citations: 3

A Closer Look at Robustness of Vision Transformers to Backdoor Attacks

Akshayvarun Subramanya, Soroush Abbasi Koohpayegani, Aniruddha Saha

et al.

2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Journal Year: 2024, Volume and Issue: unknown, P. 3862 - 3871

Published: Jan. 3, 2024

Transformer architectures are based on a self-attention mechanism that processes images as a sequence of patches. As their design differs considerably from that of CNNs, it is important to take a closer look at their vulnerability to backdoor attacks and at how architectural choices affect robustness. Backdoor attacks happen when an attacker poisons a small part of the training data with a specific trigger, which can be activated later: the model performs well on clean test images, but the attacker can manipulate its decision by presenting the trigger at test time. In this paper, we compare state-of-the-art architectures through the lens of backdoor attacks, specifically the role of attention mechanisms. We observe that the well-known vision transformer architecture (ViT) is the least robust, whereas ResMLP, which belongs to a class called Feed Forward Networks (FFN), is the most robust among the considered architectures. We also find an intriguing difference between transformers and CNNs: interpretation algorithms effectively highlight the trigger on test images for transformers but not for CNNs. Based on this observation, we propose a test-time image-blocking defense that reduces the attack success rate by a large margin for transformers, and we show that incorporating such blocking during the training process improves robustness even further. We believe our experimental findings will encourage the community to understand the role of building-block components in developing novel attacks. Code is available here: https://github.com/UCDvision/backdoor_transformer.git
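As a rough illustration of the test-time blocking idea (a sketch under assumptions, not the authors' released code; see the repository above for the actual implementation), the snippet below masks the image patch with the highest total saliency before the image is re-classified. The saliency map is assumed to come from an interpretation method such as attention rollout or GradCAM.

import numpy as np

def block_most_salient_patch(image, saliency, patch=16, fill=0.0):
    """Zero out the image patch whose saliency sum is highest, i.e.
    the patch most likely to contain a backdoor trigger."""
    h, w = saliency.shape
    best_score, best_ij = -np.inf, (0, 0)
    # Scan a grid of non-overlapping patches, mirroring ViT tokenization.
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            score = saliency[i:i + patch, j:j + patch].sum()
            if score > best_score:
                best_score, best_ij = score, (i, j)
    i, j = best_ij
    blocked = image.copy()
    blocked[i:i + patch, j:j + patch, :] = fill   # mask the suspected trigger
    return blocked

# Toy usage: a 224x224 RGB image with a matching saliency map.
img = np.random.rand(224, 224, 3).astype(np.float32)
sal = np.random.rand(224, 224).astype(np.float32)
defended = block_most_salient_patch(img, sal)

Non-overlapping 16x16 patches are used here to mirror ViT's tokenization; a stride smaller than the patch size would localize the trigger more precisely at extra compute cost.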

Language: English

Citations: 3

Lotus: Evasive and Resilient Backdoor Attacks through Sub-Partitioning
Siyuan Cheng, Guanhong Tao, Yingqi Liu

et al.

2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal Year: 2024, Volume and Issue: unknown, P. 24798 - 24809

Published: June 16, 2024

Language: English

Citations: 3

Enhancing robustness of backdoor attacks against backdoor defenses
Bin Hu, Kehua Guo, Sheng Ren

et al.

Expert Systems with Applications, Journal Year: 2025, Volume and Issue: unknown, P. 126355 - 126355

Published: Jan. 1, 2025

Language: English

Citations: 0
