International Journal of Information Security, Journal Year: 2025, Volume and Issue: 24(2)
Published: Feb. 16, 2025
Language: English
Published: Jan. 2, 2024
This NIST AI report develops a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The taxonomy is built on a survey of the AML literature and is arranged in a conceptual hierarchy that includes key types of ML methods, the lifecycle stage of the attack, attacker goals and objectives, and attacker capabilities and knowledge of the learning process. The report also provides corresponding methods for mitigating and managing the consequences of attacks and points out relevant open challenges to take into account in the lifecycle of AI systems. The terminology used in the report is consistent with the AML literature and is complemented by a glossary of terms associated with the security of AI systems, intended to assist non-expert readers. Taken together, the taxonomy and terminology are meant to inform other standards and future practice guides for assessing the security of AI systems, by establishing a common language and understanding of the rapidly developing AML landscape.
Language: English
Citations: 46
Information Fusion, Journal Year: 2024, Volume and Issue: 107, P. 102303 - 102303
Published: Feb. 19, 2024
Language: English
Citations: 46
Internet of Things and Cyber-Physical Systems, Journal Year: 2025, Volume and Issue: unknown
Published: Feb. 1, 2025
Language: English
Citations: 4
Internet of Things and Cyber-Physical Systems, Journal Year: 2023, Volume and Issue: 3, P. 155 - 179
Published: Jan. 1, 2023
Federated Learning (FL, or Collaborative Learning (CL)) has gained a reputation not only for building Machine Learning (ML) models that rely on distributed datasets, but also for playing a key role in security and privacy solutions that protect sensitive data and information from a variety of ML-related attacks. This has made it an ideal choice for emerging networks such as Internet of Things (IoT) systems, especially with its state-of-the-art algorithms that focus on practical use over IoT networks, despite the presence of resource-constrained devices. However, the heterogeneous nature of current IoT devices and the complexity of IoT networks have seriously hindered the FL training process's ability to perform well, rendering it almost unsuitable for direct deployment, despite ongoing efforts to tackle this issue and overcome this challenging obstacle. As a result, the main characteristics of both aspects are presented in this study. We broaden our research to investigate and analyze cutting-edge FL algorithms, models, and protocols, and their efficacy and application across IoT systems alike. This is followed by a comparative analysis of recently available protection solutions, which can be based on cryptographic or non-cryptographic techniques, for heterogeneous, dynamic IoT networks. Moreover, the proposed work provides a list of suggestions and recommendations that can be applied to enhance the effectiveness of FL adoption and achieve higher robustness against attacks.
Language: English
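The survey above centers on FL training over heterogeneous IoT devices. A minimal sketch of the core aggregation step most FL schemes build on, federated averaging (FedAvg), where each client's model is weighted by its local dataset size; the model is simplified to a bare weight vector, and the sizes are illustrative:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client models, weighted by local dataset size (FedAvg)."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)        # shape: (clients, params)
    coeffs = np.array(client_sizes) / total   # each client's contribution
    return coeffs @ stacked                   # weighted average of models

# Three clients with heterogeneous dataset sizes, as on IoT devices:
models = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
global_model = fedavg(models, sizes)          # → array([4., 5.])
```

The size weighting means a device with more data pulls the global model further toward its local optimum, which is exactly why resource- and data-heterogeneity complicates FL on IoT networks.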
Citations: 31
Computer, Journal Year: 2024, Volume and Issue: 57(3), P. 26 - 34
Published: March 1, 2024
Poisoning attacks compromise the training data used to train machine learning (ML) models, diminishing their overall performance, manipulating predictions on specific test samples, and implanting backdoors. This article thoughtfully explores these attacks while discussing strategies to mitigate them through fundamental security principles or by implementing defensive mechanisms tailored for ML.
Language: English
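A toy demonstration of the attack class this article covers: a label-flipping poisoning attack against a nearest-centroid classifier. The dataset, the flipped fraction, and the test point are all illustrative, chosen so the poisoned centroid drags a previously class-1 prediction over to class 0:

```python
import numpy as np

def fit_centroids(X, y):
    """One centroid per class; the entire 'model' of this toy classifier."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Clean training data: class 0 near the origin, class 1 near (4, 4).
X = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 0.0],
              [4.0, 4.0], [4.5, 3.5], [3.5, 4.5]])
y = np.array([0, 0, 0, 1, 1, 1])
clean = fit_centroids(X, y)

# Poison: flip two class-1 labels to 0, dragging the class-0 centroid
# toward the class-1 cluster and shifting the decision boundary.
y_poisoned = y.copy()
y_poisoned[3:5] = 0
poisoned = fit_centroids(X, y_poisoned)

test_point = np.array([2.5, 2.5])
# predict(clean, test_point)    → 1 (correct side of the boundary)
# predict(poisoned, test_point) → 0 (flipped by the poisoned labels)
```

The same mechanism, corrupted labels or inserted samples moving the learned decision surface, underlies poisoning against far larger models; the defenses the article discusses (data sanitization, robust aggregation) aim to bound exactly this influence.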
Citations: 11
IEEE Transactions on Information Forensics and Security, Journal Year: 2023, Volume and Issue: 18, P. 1749 - 1762
Published: Jan. 1, 2023
Despite the large body of academic work on machine learning security, little is known about the occurrence of attacks on machine learning systems in the wild. In this paper, we report on a quantitative study with 139 industrial practitioners. We analyze attack occurrence and concern, and evaluate statistical hypotheses on factors influencing threat perception and exposure. Our results shed light on real-world attacks on deployed machine learning. On the organizational level, while we find no predictors for threat exposure in our sample, the amount of implemented defenses depends on exposure to threats or on the expected likelihood of becoming a target. We also provide a detailed analysis of practitioners' replies on the relevance of individual attacks, unveiling complex concerns such as unreliable decision making, business information leakage, and bias introduction into models. Finally, we find that prior knowledge of machine learning security influences threat perception. Our work paves the way for more research on adversarial machine learning in practice, and also yields insights for regulation and auditing.
Language: English
Citations: 19
Published: June 25, 2024
Emergencies and critical incidents often unfold rapidly, necessitating a swift and effective response. In this research, we introduce a novel approach to identify and classify emergency situations from social media posts and direct messages using an open source Large Language Model, LLAMA2. The goal is to harness the power of natural language processing and machine learning to assist public safety telecommunicators with huge crowds during countrywide emergencies. Our research focuses on developing a model that can understand how users describe their situation in a 911 call, enabling LLAMA2 to analyze the content and offer relevant instructions to the telecommunicator, while also creating workflows to notify government agencies with the caller's information when necessary. Another benefit this approach provides is its ability to help people during a significant incident when the system is overwhelmed, by assisting with simple responses and informing authorities of location information.
Language: English
Citations: 9
IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 61022 - 61035
Published: Jan. 1, 2024
This article examines the interplay between artificial intelligence (AI) and cybersecurity in light of future regulatory requirements on the security of AI systems, specifically focusing on the robustness of high-risk AI systems against cyberattacks in the context of the European Union's AI Act. The paper identifies and analyses three challenges to achieving compliance with this requirement: accounting for the diversity and complexity of AI technologies, assessing AI-specific risks, and developing secure-by-design AI systems. The contribution consists of providing an overview of current practices and identifying gaps in current approaches to conformity assessment. Our analysis highlights the unique vulnerabilities present in AI systems and the absence of established methods tailored to these systems, and emphasises the need for continuous alignment of legal requirements and technological capabilities, acknowledging the necessity of further research and development to address these challenges. It concludes that a comprehensive approach must evolve to accommodate the specific aspects of AI, requiring a collaborative effort from various sectors to ensure effective implementation and standardisation.
Language: English
Citations: 7
Computers & Security, Journal Year: 2025, Volume and Issue: unknown, P. 104468 - 104468
Published: April 1, 2025
Language: English
Citations: 1
IEEE Transactions on Information Forensics and Security, Journal Year: 2023, Volume and Issue: 19, P. 777 - 792
Published: Oct. 16, 2023
In vehicular ad-hoc networks (VANET), federated learning enables vehicles to collaboratively train a global model for intelligent transportation without sharing their local data. However, due to the dynamic network structure and unreliable wireless communication of VANET, various potential risks (e.g., identity privacy leakage, data inference, integrity compromise, and manipulation) undermine the trustworthiness of the intermediate parameters necessary for building the global model. While existing cryptography techniques and differential privacy provide provable security paradigms, the practicality of secure federated learning in VANET is hindered in terms of training efficiency and performance. Therefore, developing an efficient and secure scheme remains a challenge. In this work, we propose a privacy-enhanced authentication protocol called FedComm. Unlike existing solutions, FedComm addresses the above challenge through user anonymity. First, vehicles participate in training with unlinkable pseudonyms, ensuring both privacy preservation and collaboration. Second, FedComm incorporates an authentication mechanism to guarantee the authenticity of parameters originating from anonymous vehicles. Finally, FedComm accurately identifies and completely eliminates malicious communication. Security analysis and formal verification with ProVerif demonstrate that FedComm enhances the reliability of intermediate parameters. Experimental results show that FedComm reduces the overhead of proof generation and verification by 67.38% and 67.39%, respectively, compared with state-of-the-art protocols used in federated learning.
Language: English
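A hypothetical sketch of the anonymity primitive the FedComm abstract turns on: round-scoped unlinkable pseudonyms. FedComm's actual construction is cryptographically richer (with authentication and ProVerif-verified properties); this only illustrates the underlying idea that an identifier derived from a vehicle-held secret and a round number cannot be linked across rounds by an observer without that secret. The function names and derivation are assumptions, not FedComm's scheme:

```python
import hashlib
import secrets

def pseudonym(vehicle_secret: bytes, round_id: int) -> str:
    """Derive a per-round pseudonym; unlinkable without the secret."""
    return hashlib.sha256(
        vehicle_secret + round_id.to_bytes(8, "big")
    ).hexdigest()

secret = secrets.token_bytes(32)   # held only by the vehicle
p1 = pseudonym(secret, 1)
p2 = pseudonym(secret, 2)
assert p1 != p2                    # identifiers differ across rounds
# Yet an authority holding the secret can re-derive and verify them,
# which is what lets the protocol eliminate malicious participants.
assert pseudonym(secret, 1) == p1
```

The tension this sketch makes visible, pseudonyms must be unlinkable to outsiders yet verifiable to the aggregator, is exactly what FedComm's authentication mechanism resolves.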
Citations: 17