Adversarial AI image perturbation attack invariant to object scale and type DOI

Michel L. van Lier,

Richard J. M. den Hollander,

Hugo J. Kuijf

et al.

Published: Nov. 13, 2024

Adversarial AI technologies can be used to make AI-based object detection in images malfunction. Evasion attacks apply perturbations to the input that are unnoticeable to the human eye and exploit weaknesses in detectors to prevent detection. However, evasion attacks may themselves be sensitive to factors such as apparent object type, orientation, positioning, and scale. This work evaluates the performance of a white-box attack and its robustness to these factors.
Video data from the ATR Algorithm Development Image Database is used, containing military and civilian vehicles at different ranges (1000-5000 m). An adversarial objectness gradient attack was trained to disrupt a YOLOv3 detector previously trained on this dataset. Several experiments were performed to assess whether the attack successfully prevented detection of the vehicles at different ranges. Results show that an attack trained at only the 1500 m range could be applied at all other ranges, with a median mAP reduction of >95%. Similarly, an attack trained on two vehicles succeeded against the seven remaining vehicles, which means attacks can succeed with limited training on multiple vehicles. Although this is a (perfect-knowledge) worst-case scenario in which the system is fully compromised and its inner workings are known to the adversary, it may serve as a basis for research into designing AI-based detectors resilient to attacks.
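
The abstract describes suppressing a detector's output via gradients computed on the input image. Below is a minimal sketch of such a white-box objectness-suppression attack; the tiny convolutional "detector", the L-inf budget, and the step schedule are illustrative assumptions, not the paper's YOLOv3 setup or its exact adversarial objectness gradient method.

```python
# Minimal sketch of a white-box "objectness suppression" evasion attack.
# The ToyDetector is a stand-in model, NOT the YOLOv3 detector from the paper.
import torch
import torch.nn as nn

class ToyDetector(nn.Module):
    """Stand-in detector: outputs one objectness logit per spatial cell."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, stride=2, padding=1),  # 1 objectness channel
        )

    def forward(self, x):
        return self.backbone(x)  # (B, 1, H/4, W/4) objectness logits

def objectness_attack(model, image, epsilon=8 / 255, alpha=1 / 255, steps=40):
    """Iteratively perturb `image` (values in [0, 1]) to push all objectness
    logits down, keeping the perturbation within an L-inf ball of `epsilon`."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        objectness = torch.sigmoid(model(adv))
        loss = objectness.sum()              # total "objectness" to suppress
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - alpha * grad.sign()  # descend on objectness
            adv = image + (adv - image).clamp(-epsilon, epsilon)
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyDetector().eval()
    img = torch.rand(1, 3, 128, 128)
    adv = objectness_attack(model, img)
    before = torch.sigmoid(model(img)).mean().item()
    after = torch.sigmoid(model(adv)).mean().item()
    print(f"mean objectness before={before:.3f} after={after:.3f}")
```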

Language: English

Malware detection for mobile computing using secure and privacy-preserving machine learning approaches: A comprehensive survey DOI Creative Commons
Faria Nawshin,

Radwa Gad,

Devrim Ünal

et al.

Computers & Electrical Engineering, Journal Year: 2024, Volume and Issue: 117, P. 109233 - 109233

Published: April 11, 2024

Mobile devices have become an essential element in our day-to-day lives. The chances of mobile attacks are rapidly increasing with the growing use of these devices. Exploiting vulnerabilities in devices as well as stealing personal information are the principal targets of attackers. Researchers are developing various techniques for detecting and analyzing malware to overcome these issues. As new malware gets introduced frequently by malware developers, it is very challenging to come up with comprehensive algorithms to detect it. Many machine-learning and deep-learning models have been developed by researchers, and the accuracy of these models largely depends on the size and quality of the training dataset. Training a model on a diversified dataset is necessary for it to predict accurately. However, this process may raise the issue of privacy loss due to the disclosure of sensitive user information. Several approaches have been proposed to mitigate this issue, such as differential privacy, homomorphic encryption, and federated learning. This survey paper explores the significance of applying privacy-preserving machine learning to mobile operating systems, contrasting traditional machine learning and deep learning approaches to malware detection. We delve into the unique challenges and opportunities of the mobile architecture and its in-built security systems and their implications for user security. Moreover, we assess the risks associated with real-life applications and recommend strategies for a secure and privacy-preserving malware detection framework in this domain.
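
The survey contrasts privacy-preserving techniques such as federated learning and differential privacy. As a rough, generic illustration (not taken from the survey itself), the sketch below aggregates clipped client updates and adds Gaussian noise in the style of a differentially private federated round; the model shape, clipping bound, and noise scale are assumptions.

```python
# Minimal NumPy-only sketch of federated averaging with per-client clipping
# and Gaussian noise (a DP-style mechanism). All values are illustrative.
import numpy as np

def clip(update, max_norm):
    """Scale an update vector so its L2 norm is at most `max_norm`."""
    norm = np.linalg.norm(update)
    return update * min(1.0, max_norm / (norm + 1e-12))

def dp_federated_round(global_model, client_updates, max_norm=1.0,
                       noise_std=0.1, rng=None):
    """Clip each client update, average them, then add Gaussian noise
    calibrated to the clipping bound before applying to the global model."""
    rng = rng or np.random.default_rng(0)
    clipped = [clip(u, max_norm) for u in client_updates]
    mean_update = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_std * max_norm / len(client_updates),
                       size=global_model.shape)
    return global_model + mean_update + noise

if __name__ == "__main__":
    model = np.zeros(4)                         # toy "global model" parameters
    updates = [np.array([0.5, -0.2, 0.1, 0.0]),
               np.array([5.0, 5.0, 5.0, 5.0]),  # an overlarge client update
               np.array([0.4, -0.1, 0.2, 0.1])]
    print("updated model:", np.round(dp_federated_round(model, updates), 3))
```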

Language: English

Citations

13

Evaluating Realistic Adversarial Attacks against Machine Learning Models for Windows PE Malware Detection DOI Creative Commons
Muhammad Imran, Annalisa Appice, Donato Malerba

et al.

Future Internet, Journal Year: 2024, Volume and Issue: 16(5), P. 168 - 168

Published: May 12, 2024

During the last decade, the cybersecurity literature has conferred a high-level role to machine learning as a powerful security paradigm to recognise malicious software in modern anti-malware systems. However, a non-negligible limitation of the methods used to train decision models is that adversarial attacks can easily fool them. Adversarial attacks are attack samples produced by carefully manipulating samples at test time to violate model integrity by causing detection mistakes. In this paper, we analyse the performance of five realistic target-based attacks, namely Extend, Full DOS, Shift, FGSM padding + slack and GAMMA, against two models, MalConv and LGBM, learned on Windows Portable Executable (PE) malware files. Specifically, MalConv is a Convolutional Neural Network (CNN) learned from the raw bytes of PE files, while LGBM is a Gradient-Boosted Decision Tree learned on features extracted through static analysis. Notably, the attacks and models considered in this study are state-of-the-art and broadly used for malware detection tasks. In addition, we explore the effect of accounting for adversarial attacks on securing the models through an adversarial training strategy. The main contributions of this article are as follows: (1) We extend existing studies, which commonly consider small datasets, by evaluating the evasion ability of the attacks on a larger evaluation dataset. (2) To the best of our knowledge, we are the first to carry out an exploratory study to explain how the attacks change the decisions of an effective model. (3) We explore the adversarial training strategy as a means to secure the models against files generated with the attack methods. Hence, the study explains why GAMMA is actually the most effective attack method in the comparative analysis performed. On the other hand, it shows that adversarial training can help in recognising adversarial files, while also explaining how it changes model decisions.
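
As a loose illustration of the padding-style manipulation discussed above (not the authors' Extend/Shift/GAMMA implementations, nor the real MalConv or LGBM models), the sketch below appends adversarial bytes after the end of a file and tunes them by random search against a toy byte-level scorer.

```python
# Rough sketch of a "padding" evasion attack on a byte-based classifier:
# bytes appended after the end of the file do not change program behaviour,
# but they change what a raw-byte model sees. The scorer is a toy stand-in.
import numpy as np

def toy_maliciousness_score(file_bytes: bytes) -> float:
    """Stand-in for a learned classifier: maps bytes to a score in [0, 1]."""
    arr = np.frombuffer(file_bytes, dtype=np.uint8).astype(np.float64)
    return float(1.0 / (1.0 + np.exp(-(arr.mean() - 96.0) / 8.0)))

def padding_attack(file_bytes: bytes, n_pad=256, iters=200, seed=0):
    """Append `n_pad` bytes and randomly mutate them, keeping any mutation
    that lowers the classifier score (simple random-search optimisation)."""
    rng = np.random.default_rng(seed)
    pad = rng.integers(0, 256, size=n_pad, dtype=np.uint8)
    best = toy_maliciousness_score(file_bytes + pad.tobytes())
    for _ in range(iters):
        cand = pad.copy()
        idx = rng.integers(0, n_pad, size=8)          # mutate a few positions
        cand[idx] = rng.integers(0, 256, size=8, dtype=np.uint8)
        score = toy_maliciousness_score(file_bytes + cand.tobytes())
        if score < best:
            best, pad = score, cand
    return file_bytes + pad.tobytes(), best

if __name__ == "__main__":
    original = bytes(range(64, 192)) * 8              # pretend "malware" bytes
    print("original score:", round(toy_maliciousness_score(original), 3))
    adv, score = padding_attack(original)
    print("score after adversarial padding:", round(score, 3))
```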

Language: English

Citations

5

Navigating AI Cybersecurity: Evolving Landscape and Challenges DOI Open Access
Maryam Roshanaei,

Mahir R. Khan,

Natalie N. Sylvester

et al.

Journal of Intelligent Learning Systems and Applications, Journal Year: 2024, Volume and Issue: 16(03), P. 155 - 174

Published: Jan. 1, 2024

Language: English

Citations

5

Energy-latency attacks via sponge poisoning DOI
Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio

et al.

Information Sciences, Journal Year: 2025, Volume and Issue: 702, P. 121905 - 121905

Published: Jan. 30, 2025

Language: English

Citations

0

Backdoor learning curves: explaining backdoor poisoning beyond influence functions DOI Creative Commons
Antonio Emanuele Cinà, Kathrin Grosse, Sebastiano Vascon

et al.

International Journal of Machine Learning and Cybernetics, Journal Year: 2024, Volume and Issue: unknown

Published: Sept. 26, 2024

Backdoor attacks inject poisoning samples during training, with the goal of forcing a machine learning model to output an attacker-chosen class when presented with a specific trigger at test time. Although backdoor attacks have been demonstrated in a variety of settings and against different models, the factors affecting their effectiveness are still not well understood. In this work, we provide a unifying framework to study the backdoor learning process under the lens of incremental learning and influence functions. We show that effectiveness depends on (i) the complexity of the learning algorithm, controlled by its hyperparameters; (ii) the fraction of backdoor samples injected into the training set; and (iii) the size and visibility of the trigger. These factors affect how fast the model learns to correlate the presence of the trigger with the target class. Our analysis unveils the intriguing existence of a region of the hyperparameter space in which accuracy on clean samples is high while the backdoor attack is ineffective, thereby suggesting novel criteria to improve existing defenses.
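
To make the setting concrete, here is a minimal sketch of the poisoning step the abstract describes: stamping a trigger patch onto a fraction of the training images and relabelling them to the attacker-chosen class. The synthetic data, trigger placement, and parameter values are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal backdoor-poisoning sketch: the poison fraction and the trigger
# size/visibility correspond to factors (ii) and (iii) in the abstract.
import numpy as np

def inject_backdoor(images, labels, target_class, poison_fraction=0.05,
                    trigger_size=3, trigger_value=1.0, seed=0):
    """Return poisoned copies of `images`/`labels` (N, H, W arrays in [0, 1])."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_fraction * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger in the bottom-right corner and relabel to the target.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    labels[idx] = target_class
    return images, labels, idx

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.random((1000, 28, 28))
    y = rng.integers(0, 10, size=1000)
    Xp, yp, poisoned_idx = inject_backdoor(X, y, target_class=7)
    print(f"poisoned {len(poisoned_idx)} of {len(X)} samples "
          f"({np.mean(yp[poisoned_idx] == 7):.0%} now labelled 7)")
```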

Language: English

Citations

0

Robustness of models addressing Information Disorder: A comprehensive review and benchmarking study DOI
Giuseppe Fenza, Vincenzo Loia, Claudio Stanzione

et al.

Neurocomputing, Journal Year: 2024, Volume and Issue: 596, P. 127951 - 127951

Published: May 31, 2024

Language: English

Citations

0

Measuring the risk of evasion and poisoning attacks on a traffic sign recognition system DOI
Vita Santa Barletta, Christian Catalano, Mario Colucci

et al.

Published: Nov. 11, 2024

Language: English

Citations

0

Enhancing Algorithmic Resilience Against Data Poisoning Using CNN DOI

J. Jayapradha,

Lakshmi Vadhanie,

Yukta Kulkarni

et al.

Advances in IT standards and standardization research (AISSR) book series, Journal Year: 2024, Volume and Issue: unknown, P. 131 - 157

Published: May 1, 2024

The work aims to improve model resilience and accuracy in machine learning (ML) by addressing data poisoning attacks. Data poisoning attacks are a type of adversarial attack where malicious data is injected into the training set to manipulate the model's output, compromising performance and security. To tackle this, a multi-faceted approach is proposed, including data assessment and cleaning, and detecting poisoned samples using outlier and anomaly detection techniques. The authors also train robust models with techniques such as adversarial training, regularization, and data diversification. Additionally, they use ensemble methods that combine the strengths of multiple models, as well as Gaussian processes and Bayesian optimization, to contribute to model security, providing an integrated solution for advancing the understanding of data poisoning defenses in the community.
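
As a generic illustration of one defence step named above, screening the training set with an outlier detector before fitting a model, the minimal sketch below uses synthetic data and scikit-learn; it is not the chapter's exact CNN pipeline.

```python
# Screen training data with an outlier detector, then fit an ensemble model
# on the retained samples. Data and contamination rate are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)

# Clean two-class data plus a small cluster of injected (poisoned) points.
X_clean = np.vstack([rng.normal(0, 1, (480, 2)), rng.normal(4, 1, (480, 2))])
y_clean = np.array([0] * 480 + [1] * 480)
X_poison = rng.normal(10, 0.3, (40, 2))      # far-off, attacker-injected blob
y_poison = np.array([0] * 40)                # mislabeled on purpose
X = np.vstack([X_clean, X_poison])
y = np.concatenate([y_clean, y_poison])

# Flag and drop the most anomalous samples before training.
detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
keep = detector.predict(X) == 1              # +1 = inlier, -1 = outlier
print(f"dropped {np.sum(~keep)} suspected poisoned samples")

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[keep], y[keep])
print("training accuracy on retained data:",
      round(model.score(X[keep], y[keep]), 3))
```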

Language: English

Citations

0

Mitigating Gradient-Based Data Poisoning Attacks on Machine Learning Models: A Statistical Detection Method DOI Open Access

Lavanya Sanapala,

Lakshmeeswari Gondi

Indian Journal of Science and Technology, Journal Year: 2024, Volume and Issue: 17(21), P. 2218 - 2231

Published: May 25, 2024

Objectives: This research paper aims to develop a novel method for identifying gradient-based data poisoning attacks on industrial applications like autonomous vehicles and intelligent healthcare systems that rely on machine learning and deep learning techniques. These algorithms perform well only if they are trained on a good-quality dataset. However, ML models are prone to attacks that target the training dataset and manipulate its input samples such that the algorithm gets confused and produces wrong predictions. Current detection techniques are effective at detecting known attacks but lack generalized detection of unknown attacks. To address this issue, security elements are integrated within the framework, guaranteeing identification and mitigation of threats to achieve detection. Methods: ML-Filter, a unique attack detection approach, integrates an ML-Filter Detection Algorithm with Statistical Perturbation Bounds Identification to determine whether a given dataset is poisoned or not. DBSCAN is used to divide the dataset into several smaller subsets for the algorithmic analysis, and the performance of the proposed method is evaluated in terms of true positive rate, significance test, and accuracy. Findings: The probability distribution differences between original and poisoned datasets vary with the change in perturbation size rather than with the application in use. This finding leads to perturbation bounds derived using statistical pairwise distance metrics and corresponding significance tests computed from the results. ML-Filter demonstrates a high true positive rate of 99.63% and achieves an accuracy of 98%. Novelty: A secured architecture, ML-Filter, effectively demonstrating significant advancements in detecting both known and unknown attacks utilizing ML algorithms. Keywords: Privacy and security, Adversarial learning, Secured architecture
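
A loose sketch of the detection idea summarised above, under assumed thresholds and synthetic data (not the authors' ML-Filter code): partition the incoming dataset with DBSCAN, then compare each subset's feature distribution against a trusted reference with a statistical test.

```python
# Split incoming data into subsets with DBSCAN, then flag subsets whose
# per-feature distribution differs significantly from a trusted reference.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

reference = rng.normal(0.0, 1.0, (500, 2))             # trusted clean sample
incoming = np.vstack([
    rng.normal(0.0, 1.0, (450, 2)),                     # clean portion
    rng.normal(0.0, 0.5, (50, 2)) + 5.0,                # perturbed/shifted blob
])

labels = DBSCAN(eps=0.7, min_samples=10).fit_predict(incoming)

alpha = 0.01                                            # assumed significance level
for cluster in sorted(set(labels)):
    subset = incoming[labels == cluster]
    # Two-sample KS test per feature; take the smallest p-value as the verdict.
    p_values = [ks_2samp(subset[:, j], reference[:, j]).pvalue
                for j in range(incoming.shape[1])]
    verdict = "POISON-SUSPECT" if min(p_values) < alpha else "clean"
    print(f"cluster {cluster:2d}: {len(subset):3d} samples -> {verdict}")
```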

Language: English

Citations

0

Bagging as Defence Mechanism Against Adversarial Attack DOI

Masroor Ahmed,

Muhammad Atif Tahir

Published: Oct. 10, 2024

Language: English

Citations

0