Chinese Journal of Electronics, Journal Year: 2024, Issue: 33(4), pp. 1077 - 1092
Published: July 1, 2024
Language: English
Computer Communications, Journal Year: 2025, Issue: unknown, pp. 108052 - 108052
Published: Jan. 1, 2025
Language: English
Cited by: 1

Knowledge-Based Systems, Journal Year: 2024, Issue: 288, pp. 111456 - 111456
Published: Feb. 4, 2024
Language: English
Cited by: 5

Expert Systems with Applications, Journal Year: 2024, Issue: 255, pp. 124599 - 124599
Published: July 3, 2024
Language: English
Cited by: 5

IEEE Transactions on Information Forensics and Security, Journal Year: 2023, Issue: 19, pp. 104 - 119
Published: Sep. 7, 2023
To mitigate recent insidious backdoor attacks on deep learning models, advances have been made by the research community. Nonetheless, state-of-the-art defenses are either limited to specific backdoor attacks (i.e., source-agnostic attacks) or non-user-friendly in that machine-learning expertise and/or expensive computing resources are required. This work observes that all existing backdoor attacks have an inadvertent and inevitable intrinsic weakness, termed non-transferability; that is, a trigger input hijacks the backdoored model but is not effective against another model that has not been implanted with the same backdoor. With this key observation, we propose non-transferability enabled backdoor detection to identify trigger inputs for a model-under-test during run-time. Specifically, our detection allows the potentially backdoored model-under-test to predict the class label of an input. Moreover, it leverages a feature extractor to extract feature vectors for the input and a group of samples randomly picked from its predicted class label, and then compares their similarity in the extractor's latent space to determine whether the input is a benign one. The feature extractor can be provided by a reputable party, taken as a free pre-trained model privately reserved, or obtained from any open platform (e.g., ModelZoo, GitHub, Kaggle); the user thus does not need to perform costly computations. Extensive experimental evaluations on four common tasks affirm that the scheme achieves high effectiveness (low false acceptance rate) and usability (low false rejection rate) with low latency against different types of backdoor attacks.
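The run-time check this abstract describes boils down to a latent-space similarity test against benign samples of the predicted class. Below is a minimal Python/PyTorch sketch of that idea; the function name is_trigger_input, the samples_by_class lookup, and the threshold tau are illustrative assumptions, not the paper's API.

import torch
import torch.nn.functional as F

def is_trigger_input(x, model_under_test, feature_extractor,
                     samples_by_class, tau=0.5, k=16):
    # Let the (potentially backdoored) model-under-test predict a label.
    with torch.no_grad():
        pred = model_under_test(x.unsqueeze(0)).argmax(dim=1).item()
        # Randomly pick k benign reference samples from the predicted class.
        refs = samples_by_class[pred]
        refs = refs[torch.randperm(refs.shape[0])[:k]]
        # Compare the input and references in the extractor's latent space.
        f_x = feature_extractor(x.unsqueeze(0))   # shape (1, d)
        f_refs = feature_extractor(refs)          # shape (k, d)
        sim = F.cosine_similarity(f_x, f_refs).mean()
    # Non-transferability: a trigger input hijacks the backdoored model but
    # not the independent extractor, so low similarity flags it as suspect.
    return sim.item() < tau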
Language: English
Cited by: 10

Information Fusion, Journal Year: 2024, Issue: 105, pp. 102251 - 102251
Published: Jan. 11, 2024
Language: English
Cited by: 4

2022 IEEE Symposium on Security and Privacy (SP), Journal Year: 2024, Issue: 4, pp. 2048 - 2066
Published: May 19, 2024
Language: English
Cited by: 4

IEEE Transactions on Information Forensics and Security, Journal Year: 2024, Issue: 19, pp. 2356 - 2369
Published: Jan. 1, 2024
Deep learning models with backdoors act maliciously when triggered but seem normal otherwise. This risk, often increased by model outsourcing, challenges their secure use. Although countermeasures exist, defense against adaptive attacks is under-examined, possibly leading to security misjudgments. This study is the first intricate examination illustrating the difficulty of detecting backdoors in outsourced models, especially when attackers adjust their strategies, even if their capabilities are significantly limited. It is relatively straightforward for an attacker to circumvent detection by trivially violating its threat model (e.g., using advanced backdoor types or trigger designs not covered by the detection). However, this research highlights that various defenses can simultaneously be evaded by simple adaptive attacks under a defined and limited adversary model (e.g., using easily detectable triggers while maintaining a high attack success rate). To be more specific, it introduces a novel methodology that employs specificity enhancement and training regulation in a symbiotic manner. This approach allows us to evade multiple defenses simultaneously, including Neural Cleanse (Oakland 19'), ABS (CCS 19'), and MNTD (Oakland 21'). These were the detection tools selected by the Evasive Trojans Track of the 2022 NeurIPS Trojan Detection Challenge. Even when applied in conjunction with these stringent conditions, such as a high attack success rate (> 97%) and the restricted use of the simplest trigger (a small white square), our method garnered the second prize in the challenge. Notably, for the first time, it also successfully evades other recent state-of-the-art defenses, FeatureRE (NeurIPS 22') and Beatrix (NDSS 23'). This suggests that existing defenses against model outsourcing remain vulnerable to adaptive attacks; thus, third-party outsourcing should be avoided whenever possible.
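One way to picture the "specificity enhancement" idea above is poisoned training in which near-trigger perturbations keep their true labels, so the backdoor fires only on the exact trigger and leaves less of the broad signature that reverse-engineering detectors search for. The Python sketch below is a hypothetical illustration under that reading, not the authors' released method; poison_batch, mask, and noise_std are assumed names.

import torch

def poison_batch(x, y, trigger, mask, target_class, noise_std=0.1):
    # Exact trigger stamped onto the batch: relabel to the target class.
    x_trig = x * (1 - mask) + trigger * mask
    y_trig = torch.full_like(y, target_class)
    # Near-trigger variants keep their TRUE labels, pushing the model to
    # respond only to the precise trigger (higher specificity).
    near = (trigger + noise_std * torch.randn_like(trigger)).clamp(0.0, 1.0)
    x_near = x * (1 - mask) + near * mask
    return torch.cat([x, x_trig, x_near]), torch.cat([y, y_trig, y])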
Language: English
Cited by: 3

IEEE Access, Journal Year: 2024, Issue: 12, pp. 47433 - 47468
Published: Jan. 1, 2024
Deep learning has significantly transformed face recognition, enabling the deployment of large-scale, state-of-the-art solutions worldwide. However, the widespread adoption of deep neural networks (DNNs) and the rise of Machine Learning as a Service emphasize the need for secure DNNs. This paper revisits the face recognition threat model in the context of DNN ubiquity and the common practice of outsourcing their training and hosting to third parties. Here, we identify backdoor attacks as a significant threat to modern DNN-based face recognition systems (FRS). Backdoor attacks involve an attacker manipulating a DNN's training or deployment, injecting it with stealthy and malicious behavior. Once the DNN has entered its inference stage, the attacker may activate the backdoor to compromise its intended functionality. Given the critical nature of this threat to FRS, our work comprehensively surveys the literature on backdoor attacks and defenses previously demonstrated on FRS. As a last point, we highlight potential vulnerabilities and unexplored areas of FRS security.
Language: English
Cited by: 3

2022 IEEE Symposium on Security and Privacy (SP), Journal Year: 2024, Issue: 33, pp. 1646 - 1664
Published: May 19, 2024
Speech recognition systems driven by Deep Neural Networks (DNNs) have revolutionized human-computer interaction through voice interfaces, which significantly facilitate our daily lives. However, the growing popularity of these systems also raises special concerns on their security, particularly regarding backdoor attacks. A backdoor attack inserts one or more hidden backdoors into a DNN model during its training process, such that it does not affect the model's performance on benign inputs, but forces the model to produce an adversary-desired output if a specific trigger is present in the input. Despite initial success, current audio backdoor attacks suffer from the following limitations: (i) Most of them require sufficient knowledge, which limits their widespread adoption. (ii) They are not stealthy enough, and thus easy to be detected by humans. (iii) They cannot attack live speech, reducing their practicality. To address these problems, in this paper we propose FlowMur, a stealthy and practical audio backdoor attack that can be launched with limited knowledge. FlowMur constructs an auxiliary dataset and a surrogate model to augment adversary knowledge. To achieve dynamicity, it formulates trigger generation as an optimization problem and optimizes the trigger over different attachment positions. To enhance stealthiness, it employs an adaptive data poisoning method according to the Signal-to-Noise Ratio (SNR). Furthermore, ambient noise is incorporated into the trigger-generation and poisoning process to make FlowMur robust and improve its practicality. Extensive experiments conducted on two datasets demonstrate that FlowMur achieves high attack performance in both digital and physical settings while remaining resilient to state-of-the-art defenses. In particular, a human study confirms that triggers generated by FlowMur are not easily detected by participants.
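The SNR-adaptive poisoning step described in this abstract can be pictured as scaling the trigger so that the speech-to-trigger SNR hits a chosen level before adding it at a random attachment position. The Python sketch below shows that arithmetic only; attach_trigger and the default snr_db are assumptions, not FlowMur's implementation.

import torch

def attach_trigger(speech, trigger, snr_db=20.0):
    # Pick a random attachment position within the utterance.
    pos = torch.randint(0, speech.numel() - trigger.numel() + 1, (1,)).item()
    segment = speech[pos:pos + trigger.numel()]
    # SNR(dB) = 10*log10(P_speech / P_trigger); solve for the trigger scale.
    p_speech = segment.pow(2).mean()
    p_trigger = trigger.pow(2).mean()
    scale = torch.sqrt(p_speech / (p_trigger * 10 ** (snr_db / 10.0)))
    poisoned = speech.clone()
    poisoned[pos:pos + trigger.numel()] += scale * trigger
    return poisoned.clamp(-1.0, 1.0)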
Language: English
Cited by: 3

Published: May 4, 2025
Cited by: 0