Chinese Journal of Electronics, Journal Year: 2024, Volume and Issue: 33(4), P. 1077 - 1092
Published: July 1, 2024
Language: English
Computer Communications, Journal Year: 2025, Volume and Issue: unknown, P. 108052 - 108052
Published: Jan. 1, 2025
Language: English
Citations: 1
Knowledge-Based Systems, Journal Year: 2024, Volume and Issue: 288, P. 111456 - 111456
Published: Feb. 4, 2024
Language: English
Citations: 5
Expert Systems with Applications, Journal Year: 2024, Volume and Issue: 255, P. 124599 - 124599
Published: July 3, 2024
Language: English
Citations: 5
IEEE Transactions on Information Forensics and Security, Journal Year: 2023, Volume and Issue: 19, P. 104 - 119
Published: Sept. 7, 2023
To mitigate recent insidious backdoor attacks on deep learning models, advances have been made by the research community. Nonetheless, state-of-the-art defenses are either limited to specific attacks (i.e., source-agnostic attacks) or non-user-friendly in that machine-learning expertise and/or expensive computing resources are required. This work observes that all existing attacks share an inadvertent and inevitable intrinsic weakness, termed non-transferability: a trigger input hijacks the backdoored model but is not effective against another model that has not been implanted with the same backdoor. Building on this key observation, we propose a non-transferability enabled detection to identify trigger inputs for a model-under-test during run-time. Specifically, our scheme allows the potentially backdoored model to predict a class label for the input. It then leverages a feature extractor to extract feature vectors for the input and for a group of samples randomly picked from its predicted class, and compares their similarity in the extractor's latent space to determine whether the input is a benign one. The feature extractor can be provided by a reputable party for free, pre-trained and privately reserved or taken from any open platform (e.g., ModelZoo, GitHub, Kaggle); the user thus does not need to perform costly computations. Extensive experimental evaluations on four common tasks affirm that the scheme achieves high effectiveness (low false acceptance rate) and usability (low false rejection rate) with low latency against different types of backdoor attacks.
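For concreteness, the following Python sketch illustrates the latent-space similarity check described in the abstract above: the input's feature vector is compared against feature vectors of benign samples randomly picked from its predicted class. It is a minimal illustration under stated assumptions, not the authors' implementation; the feature extractor, reference samples, similarity threshold, and helper names are hypothetical.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def is_trigger_input(x, predicted_label, feature_extractor, class_samples,
                     n_reference=16, threshold=0.5, rng=None):
    """Flag x as a potential trigger input if its feature vector is dissimilar
    to randomly picked benign samples of the class the model predicted for it."""
    rng = rng if rng is not None else np.random.default_rng()
    refs = class_samples[predicted_label]  # benign samples of the predicted class
    idx = rng.choice(len(refs), size=min(n_reference, len(refs)), replace=False)
    fx = feature_extractor(x)
    sims = [cosine_similarity(fx, feature_extractor(refs[i])) for i in idx]
    # Low similarity to its own predicted class suggests the label was hijacked
    # by a trigger rather than earned by benign content.
    return float(np.mean(sims)) < threshold

if __name__ == "__main__":
    # Toy data standing in for images and a linear "feature extractor".
    rng = np.random.default_rng(0)
    W = rng.normal(size=(32, 64))                     # hypothetical extractor weights
    feature_extractor = lambda v: W @ v
    mean0 = rng.normal(size=64)                       # class-0 prototype
    class_samples = {0: mean0 + 0.1 * rng.normal(size=(100, 64))}
    x_benign = mean0 + 0.1 * rng.normal(size=64)      # similar to its class -> not flagged
    x_suspect = rng.normal(size=64)                   # unrelated to its class -> flagged
    print(is_trigger_input(x_benign, 0, feature_extractor, class_samples, rng=rng))
    print(is_trigger_input(x_suspect, 0, feature_extractor, class_samples, rng=rng))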
Language: English
Citations: 10
Information Fusion, Journal Year: 2024, Volume and Issue: 105, P. 102251 - 102251
Published: Jan. 11, 2024
Language: English
Citations: 4
2022 IEEE Symposium on Security and Privacy (SP), Journal Year: 2024, Volume and Issue: 4, P. 2048 - 2066
Published: May 19, 2024
Language: English
Citations: 4
IEEE Transactions on Information Forensics and Security, Journal Year: 2024, Volume and Issue: 19, P. 2356 - 2369
Published: Jan. 1, 2024
Deep learning models with backdoors act maliciously when triggered but seem normal otherwise. This risk, often increased by model outsourcing, challenges their secure use. Although countermeasures exist, defense against adaptive attacks is under-examined, possibly leading to security misjudgments. This study is the first intricate examination illustrating the difficulty of detecting backdoors in outsourced models, especially when attackers adjust their strategies, even if their capabilities are significantly limited. It is relatively straightforward for an attacker to circumvent detection by trivially violating its threat model (e.g., using advanced backdoor types or trigger designs not covered by the detection). However, this research highlights that various defenses can be evaded simultaneously by simple attacks under a defined and limited adversary model (e.g., using easily detectable triggers while maintaining a high attack success rate). To be more specific, it introduces a novel methodology that employs specificity enhancement and training regulation in a symbiotic manner. This approach allows us to evade multiple detections simultaneously, including Neural Cleanse (Oakland 19'), ABS (CCS 19'), and MNTD (Oakland 21'). These were the detection tools selected for the Evasive Trojans Track of the 2022 NeurIPS Trojan Detection Challenge. Even when applied under stringent conditions, such as a high attack success rate (> 97%) and restricted use of the simplest trigger (a small white square), our method garnered second prize. Notably, for the first time, we successfully evade other recent state-of-the-art defenses, FeatureRE (NeurIPS 22') and Beatrix (NDSS 23'). This suggests that existing defenses for outsourced models remain vulnerable to adaptive attacks; thus, third-party outsourcing should be avoided whenever possible.
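As background for the trigger constraint mentioned above (the simplest trigger, a small white square), the following Python sketch shows a generic dirty-label poisoning step: a white square is stamped onto a random fraction of training images whose labels are flipped to an attacker-chosen target class. This is a textbook-style illustration only; it does not reproduce the paper's specificity-enhancement or training-regulation components, and the patch size, poison ratio, and target label are illustrative assumptions.

import numpy as np

def stamp_white_square(images, size=3):
    """Place a size x size white patch in the bottom-right corner
    of images shaped (N, H, W, C) with pixel values in [0, 1]."""
    stamped = images.copy()
    stamped[:, -size:, -size:, :] = 1.0
    return stamped

def poison_dataset(images, labels, target_label=0, poison_ratio=0.05, rng=None):
    """Stamp the trigger onto a random fraction of samples and flip their labels."""
    rng = rng if rng is not None else np.random.default_rng()
    n_poison = int(len(images) * poison_ratio)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images = images.copy()
    labels = labels.copy()
    images[idx] = stamp_white_square(images[idx])
    labels[idx] = target_label
    return images, labels, idx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random((200, 32, 32, 3))     # stand-in training images in [0, 1]
    y = rng.integers(0, 10, size=200)    # stand-in labels for 10 classes
    x_p, y_p, poisoned_idx = poison_dataset(x, y, target_label=0, rng=rng)
    print(len(poisoned_idx), "of", len(x_p), "samples carry the trigger and the target label")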
Language: English
Citations: 3
IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 47433 - 47468
Published: Jan. 1, 2024
Deep learning has significantly transformed face recognition, enabling the deployment of large-scale, state-of-the-art solutions worldwide. However, the widespread adoption of deep neural networks (DNNs) and the rise of Machine Learning as a Service emphasize the need for secure DNNs. This paper revisits the face recognition threat model in the context of DNN ubiquity and the common practice of outsourcing their training and hosting to third parties. Here, we identify backdoor attacks as a significant threat to modern DNN-based face recognition systems (FRS). Backdoor attacks involve an attacker manipulating a DNN's training or deployment, injecting it with stealthy and malicious behavior. Once the model has entered its inference stage, the attacker may activate the backdoor and compromise the model's intended functionality. Given the critical nature of this threat to FRS, our work comprehensively surveys the literature on backdoor attacks and defenses previously demonstrated on FRS. As a last point, we highlight potential vulnerabilities and unexplored areas of FRS security.
Language: English
Citations: 3
2022 IEEE Symposium on Security and Privacy (SP), Journal Year: 2024, Volume and Issue: 33, P. 1646 - 1664
Published: May 19, 2024
Speech recognition systems driven by Deep Neural Networks (DNNs) have revolutionized human-computer interaction through voice interfaces, which significantly facilitate our daily lives. However, the growing popularity of these systems also raises special concerns about their security, particularly regarding backdoor attacks. A backdoor attack inserts one or more hidden backdoors into a DNN model during its training process, such that it does not affect the model's performance on benign inputs but forces the model to produce an adversary-desired output if a specific trigger is present in the input. Despite the initial success of current audio backdoor attacks, they suffer from the following limitations: (i) most of them require sufficient knowledge, which limits their widespread adoption; (ii) they are not stealthy enough and thus easy to be detected by humans; and (iii) they cannot attack live speech, reducing their practicality. To address these problems, in this paper we propose FlowMur, a stealthy and practical audio backdoor attack that can be launched with limited knowledge. FlowMur constructs an auxiliary dataset and a surrogate model to augment adversary knowledge. To achieve dynamicity, it formulates trigger generation as an optimization problem and optimizes the trigger over different attachment positions. To enhance stealthiness, it employs an adaptive data poisoning method according to the Signal-to-Noise Ratio (SNR). Furthermore, ambient noise is incorporated into the process to make FlowMur robust to ambient noise and improve its practicality. Extensive experiments conducted on two datasets demonstrate that FlowMur achieves a high attack success rate in both digital and physical settings while remaining resilient to state-of-the-art defenses. In particular, a human study confirms that triggers generated by FlowMur are not easily detected by participants.
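The following Python sketch illustrates, in simplified form, two ingredients described in the abstract above: attaching an audio trigger at a variable position within a speech waveform and scaling it to a target signal-to-noise ratio, with optional ambient noise. It is not the FlowMur implementation; the sample rate, SNR target, tone-shaped trigger, and Gaussian noise model are assumptions made for illustration.

import numpy as np

def scale_to_snr(speech, trigger, snr_db):
    """Scale the trigger so that 10*log10(P_speech / P_trigger) equals snr_db."""
    p_speech = np.mean(speech ** 2)
    p_trigger = np.mean(trigger ** 2) + 1e-12
    target_p = p_speech / (10 ** (snr_db / 10))
    return trigger * np.sqrt(target_p / p_trigger)

def attach_trigger(speech, trigger, snr_db=20.0, ambient_std=0.0, rng=None):
    """Insert the SNR-scaled trigger at a random offset and optionally add
    ambient noise, returning the poisoned waveform."""
    rng = rng if rng is not None else np.random.default_rng()
    trig = scale_to_snr(speech, trigger, snr_db)
    offset = rng.integers(0, len(speech) - len(trig) + 1)  # variable attachment position
    poisoned = speech.copy()
    poisoned[offset:offset + len(trig)] += trig
    if ambient_std > 0:
        poisoned += rng.normal(0.0, ambient_std, size=poisoned.shape)
    return poisoned

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speech = rng.normal(0, 0.1, size=16000)                        # 1 s of stand-in audio at 16 kHz
    trigger = np.sin(2 * np.pi * 440 * np.arange(1600) / 16000)    # 0.1 s tone as a stand-in trigger
    poisoned = attach_trigger(speech, trigger, snr_db=20.0, ambient_std=0.01, rng=rng)
    print(poisoned.shape)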
Language: English
Citations: 3
Published: May 4, 2025
Citations: 0