Published: Nov. 13, 2024
Adversarial AI technologies can be used to make AI-based object detection in images malfunction.
Evasion attacks apply perturbations to the input that are unnoticeable to the human eye and exploit weaknesses in detectors to prevent detection.
However, evasion attacks have themselves been shown to be sensitive to the apparent object type, orientation, positioning, and scale.
This work evaluates the performance of a white-box attack and its robustness to these factors.
Video data from the ATR Algorithm Development Image Database is used, containing military and civilian vehicles at different ranges (1000-5000 m).
An AOG (adversarial objectness gradient) attack was trained to disrupt a YOLOv3 detector previously trained on this dataset.
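The abstract does not include implementation details. As a rough illustration of the underlying idea, the sketch below shows a generic PGD-style objectness-suppression attack; it assumes a PyTorch YOLOv3 wrapper whose forward pass returns per-anchor objectness logits (that interface, and all parameter values, are assumptions made for illustration, not the paper's method).

```python
import torch

def objectness_evasion_attack(model, image, eps=8/255, alpha=1/255, steps=40):
    """PGD-style sketch: perturb `image` so the detector's objectness collapses.

    Assumes `model(x)` returns per-anchor objectness logits for a batch x in
    [0, 1]; this interface is hypothetical -- real YOLOv3 code returns raw
    predictions that must be sliced to isolate the objectness channel.
    """
    image = image.clone().detach()
    delta = torch.zeros_like(image, requires_grad=True)  # adversarial perturbation

    for _ in range(steps):
        obj_logits = model(image + delta)        # per-anchor objectness logits
        # Push all objectness scores toward zero so no box survives thresholding.
        loss = torch.sigmoid(obj_logits).sum()
        loss.backward()

        with torch.no_grad():
            # Gradient *descent* on objectness, confined to an L-inf ball of
            # radius eps so the perturbation stays hard to spot by eye.
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            # Keep the perturbed image a valid image in [0, 1].
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()

    return (image + delta).detach()
```

This is a white-box formulation: computing the gradient of the objectness loss with respect to the input requires full access to the model's weights, matching the perfect-knowledge scenario described below.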
Several experiments were performed to assess whether the attack successfully prevented vehicle detection at the various ranges.
Results show that an attack trained on only the 1500 m range and applied to all other ranges achieves a median mAP reduction of >95%.
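Reading the reported figure as a reduction relative to the clean-data baseline (an assumption; the abstract does not specify), the arithmetic is simply:

```python
# Hypothetical illustration: relative mAP reduction. Both values are
# placeholders, not results from the paper.
map_clean = 0.80      # detector mAP on unattacked frames (assumed)
map_attacked = 0.03   # detector mAP on attacked frames (assumed)
reduction = (map_clean - map_attacked) / map_clean
print(f"mAP reduction: {reduction:.0%}")  # -> 96%, i.e. >95%
```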
Similarly, when trained on only two vehicles and applied to the seven remaining vehicles, the attack remains effective, which means that evasion attacks can succeed with limited training data across multiple vehicles.
Although this white-box (perfect-knowledge) attack represents a worst-case scenario in which the system is fully compromised and its inner workings are known to the adversary, it may serve as a basis for research into designing AI-based detectors that are resilient to such attacks.
Language: English