<p>Artificial Neural Networks and Convolutional Neural Networks have become common tools for classification and object detection, owing to their ability to learn features without prior knowledge. During training, these networks adjust their parameters, i.e., the weights and biases. This paper proposes a simple Convolutional Neural Network (CNN) for a classification task. Furthermore, an existing Bayesian neural network work is reproduced as a baseline for comparison with my proposed networks. All experiments were conducted using the MNIST dataset.</p>
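As a rough illustration of the kind of simple CNN classifier the abstract describes (the exact architecture and hyperparameters are not given there, so the layer sizes and training settings below are assumptions), a minimal Keras sketch for MNIST might look like this:

```python
# Minimal MNIST CNN sketch; layer sizes and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0   # (60000, 28, 28, 1)
x_test = x_test[..., None].astype("float32") / 255.0

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),          # dropout is one of the hyperparameters the paper varies
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```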
<p>While
convolutional
adjust
parameters
based
on
cost
function
during
updates
its
backdrop
that
drives
variational
approximation
true
posterior.
Hyperparameters
such
optimizer,
learning
rate,
regularizers,
dropout,
epochs,
etc.,
varied
train
two
The
achieved
better
accuracy,
approximately
99\%,
than
previously
implemented
network.
However,
it
difficult
predict
certainty
of
predictions
made
by
networks,
unlike
learning,
which
makes
easy
do
so.
\href{https://github.com/Simeon340703/Classification_Networks}{You
can
find
code
this
at}.</p>
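The paragraph contrasts point-estimate CNNs with Bayesian treatments that yield predictive uncertainty. One common, lightweight way to approximate such uncertainty (shown only as an illustration, not necessarily the Bayesian baseline reproduced in the paper) is Monte Carlo dropout, where dropout stays active at test time and the spread of repeated predictions acts as an uncertainty signal:

```python
# Monte Carlo dropout sketch: keep dropout active at inference and average several
# stochastic forward passes; the per-class spread serves as an uncertainty proxy.
# This illustrates uncertainty estimation in general, not the paper's exact method.
import numpy as np
import tensorflow as tf

def mc_dropout_predict(model, x, n_samples=30):
    # training=True keeps dropout layers stochastic at inference time
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    mean = preds.mean(axis=0)    # predictive mean over classes
    std = preds.std(axis=0)      # spread across passes ~ predictive uncertainty
    return mean, std

# Usage (assuming `model` and `x_test` from the previous sketch):
# mean, std = mc_dropout_predict(model, x_test[:16])
# confident = std.max(axis=1) < 0.1   # arbitrary illustrative threshold
```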
Sensors, Journal Year: 2022, Volume and Issue: 22(24), P. 9577 - 9577, Published: Dec. 7, 2022
LiDAR is a commonly used sensor for autonomous driving to make accurate, robust, and fast decisions when driving. The sensor is used in the perception system, especially object detection, to understand the driving environment. Although 2D object detection has succeeded during the deep-learning era, the lack of depth information limits understanding of the environment and object location. Three-dimensional sensors, such as LiDAR, give 3D information about the surrounding environment, which is essential for a 3D perception system. Despite the attention of the computer vision community to 3D object detection, due to its multiple applications in robotics and autonomous driving, there are challenges such as scale change, sparsity, uneven distribution of the data, and occlusions. Different representations of the point cloud data and methods to minimize the effect of sparsity have been proposed. This survey presents LiDAR-based 3D object detection and feature-extraction techniques for LiDAR data. The coordinate systems differ between camera-based and LiDAR-based datasets and methods. Therefore, the commonly used coordinate systems are summarized. Then, state-of-the-art LiDAR-based object-detection methods are reviewed, with a selected comparison among the methods.
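As a small illustration of the "different representations" of point cloud data that such surveys discuss (the specific reviewed methods are not reproduced here, and the grid resolution and bounds below are arbitrary assumptions), one common way to cope with sparse, unevenly distributed LiDAR points is to voxelize the sweep into a regular grid before feature extraction:

```python
# Toy voxelization sketch: bin raw LiDAR points of shape (N, 3) into a coarse
# occupancy grid. Real detectors learn per-voxel features; this only shows the
# representation change from an irregular point set to a dense grid.
import numpy as np

def voxelize(points, voxel_size=0.2, bounds=((-40, 40), (-40, 40), (-3, 3))):
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = bounds
    # keep only points inside the region of interest
    mask = ((points[:, 0] >= xmin) & (points[:, 0] < xmax) &
            (points[:, 1] >= ymin) & (points[:, 1] < ymax) &
            (points[:, 2] >= zmin) & (points[:, 2] < zmax))
    pts = points[mask]
    # integer voxel index for each remaining point
    idx = ((pts - np.array([xmin, ymin, zmin])) / voxel_size).astype(int)
    shape = (int((xmax - xmin) / voxel_size),
             int((ymax - ymin) / voxel_size),
             int((zmax - zmin) / voxel_size))
    grid = np.zeros(shape, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0   # occupancy flag per voxel
    return grid

# Example with random points standing in for a LiDAR sweep:
# grid = voxelize(np.random.uniform(-40, 40, size=(10000, 3)))
```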
Sensors, Journal Year: 2022, Volume and Issue: 22(21), P. 8268 - 8268, Published: Oct. 28, 2022
Fish species recognition is crucial to identifying the abundance of fish in a specific area, controlling production management, and monitoring the ecosystem, especially for endangered species, which makes accurate recognition essential. In this work, the problem is formulated as an object detection model to handle multiple fish in a single image, which is challenging to classify using a simple classification network. The proposed model consists of MobileNetv3-large and VGG16 backbone networks with an SSD detection head. Moreover, a class-aware loss function is proposed to solve the class imbalance problem of our dataset. It takes the number of instances in each class into account and gives more weight to those classes with fewer instances. This loss can be applied to any classification or detection task with an imbalanced dataset. The experimental result on the large-scale reef fish dataset, SEAMAPD21, shows that the proposed model improves over the original by up to 79.7%. The result on the Pascal VOC dataset also outperforms the original model.
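The class-aware loss described above weights classes by their instance counts so that rare species contribute more to the loss. The exact weighting scheme is not given in the abstract; the inverse-frequency variant sketched below is an assumption that illustrates the idea, not necessarily the paper's formulation:

```python
# Sketch of an instance-count-aware weighted cross-entropy. The inverse-frequency
# weighting is an illustrative assumption, not the paper's exact class-aware loss.
import torch
import torch.nn.functional as F

def class_aware_weights(instance_counts):
    counts = torch.as_tensor(instance_counts, dtype=torch.float32)
    weights = counts.sum() / (len(counts) * counts)   # rarer class -> larger weight
    return weights / weights.mean()                   # normalize around 1.0

def class_aware_ce(logits, targets, instance_counts):
    weights = class_aware_weights(instance_counts).to(logits.device)
    return F.cross_entropy(logits, targets, weight=weights)

# Example: 3 species with 500, 50, and 5 labeled instances
# logits = torch.randn(8, 3); targets = torch.randint(0, 3, (8,))
# loss = class_aware_ce(logits, targets, [500, 50, 5])
```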
IEEE Sensors Journal, Journal Year: 2023, Volume and Issue: 23(4), P. 3378 - 3394, Published: Jan. 13, 2023
An accurate and robust perception system is key to understanding the driving environment of autonomous robots. Autonomous driving needs 3-D information about objects, including the object's location and pose, to understand the environment clearly. A camera sensor is widely used in autonomous driving because of its richness in color and texture, and its low price. The major problem with the camera is the lack of 3-D information, which is necessary to understand the 3-D environment. In addition, the object's scale change and occlusion make 3-D object detection more challenging. Many deep learning-based methods, such as depth estimation, have been developed to compensate for the lack of 3-D information. This survey presents image-based 3-D object detection, bounding box encoding techniques, and evaluation metrics. The image-based methods are categorized based on the technique used to estimate an image's depth information, and insights are added to each method. Then, state-of-the-art (SOTA) monocular and stereo camera-based methods are summarized. We also compare the performance of selected models and present the challenges and future directions of image-based 3-D object detection.
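The survey covers 3-D bounding box encodings; a typical encoding parameterizes a box by its center, dimensions, and yaw, from which the eight corners can be recovered. The specific convention below (z up, yaw about the vertical axis, box resting on the ground plane) is an assumption for illustration rather than the survey's own formulation:

```python
# Illustrative 7-parameter 3-D box encoding (center x, y, z; dims l, w, h; yaw),
# recovering the 8 corner points. Axis convention (z up) is assumed for illustration.
import numpy as np

def box3d_corners(x, y, z, l, w, h, yaw):
    # axis-aligned corners around the origin, then rotate about the vertical axis
    dx = np.array([ 1,  1, -1, -1,  1,  1, -1, -1]) * l / 2
    dy = np.array([ 1, -1, -1,  1,  1, -1, -1,  1]) * w / 2
    dz = np.array([ 0,  0,  0,  0,  1,  1,  1,  1]) * h   # box sits on the ground plane
    rot = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                    [np.sin(yaw),  np.cos(yaw), 0],
                    [0,            0,           1]])
    corners = rot @ np.vstack([dx, dy, dz])                # shape (3, 8)
    return corners + np.array([[x], [y], [z]])             # translate to the center

# Example: a 4 m x 1.8 m x 1.5 m car at (10, 2, 0) rotated 30 degrees
# corners = box3d_corners(10, 2, 0, 4.0, 1.8, 1.5, np.deg2rad(30))
```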
Sensors, Journal Year: 2022, Volume and Issue: 22(21), P. 8463 - 8463, Published: Nov. 3, 2022
When it comes to some essential abilities of autonomous ground vehicles (AGV), detection is one of them. In order to safely navigate through any known or unknown environment, an AGV must be able to detect the important elements on its path. Detection is applicable both on-road and off-road, but the two environments are much different from each other. The key elements of the environment that an AGV must identify are the drivable pathway and whether there are obstacles around it. Many works have been published focusing on these components in various ways. In this paper, a survey of the most recent advancements in detection methods intended specifically for the off-road environment is presented. For this, we divided the literature into three major groups: drivable pathway detection, positive obstacles, and negative obstacles. Each portion is further divided into multiple categories based on the technology used, for example, single sensor-based approaches and how the data are analyzed. Furthermore, we added critical findings for each technology, the challenges associated with it, and possible future directions. The authors believe this work will help the reader in finding the works of others who are doing similar research.
IEEE Transactions on Intelligent Transportation Systems, Journal Year: 2023, Volume and Issue: 24(11), P. 11568 - 11594, Published: July 27, 2023
Nowadays, vehicles with a high level of automation are being driven everywhere. With the apparent success of autonomous driving technology, we keep working to achieve fully autonomous driving on roads. Efficient and accurate vehicle detection is one of the essential tasks in the environment perception of an autonomous vehicle. Therefore, numerous algorithms for vehicle detection have been developed. However, their strengths in terms of performance have not been deeply assessed or highlighted yet. This work comprehensively reviews the existing methods and datasets, considering their performances and applications in the field of autonomous driving. First, we briefly describe the detection tasks, evaluation criteria, and public datasets. Second, we provide a rigorous review of both classical and the latest methods, including machine vision-based, mmWave radar-based, LiDAR-based, and sensor fusion-based methods. Finally, we analyze the pertinent challenges and give recommendations for future works concerning vehicle detection. The present review covers over 300 research works and aims to help researchers interested in autonomous driving, especially in vehicle detection.
Measurement Sensors, Journal Year: 2024, Volume and Issue: 31, P. 101025 - 101025, Published: Jan. 8, 2024
In this paper, we propose a novel method to improve the localisation precision of identified objects. We present a framework for iteratively enhancing image region recommendations so that they better meet the ground truth values. The Faster R–CNN (FR-CNN) is an object recognition deep convolutional network. It gives the user the impression that the network is cohesive and single, and it can provide accurate and timely predictions about the whereabouts of a range of objects. We first build a unified model based on rapid FR-CNN to relocate inaccurate area recommendations. Because the emphasis is on detection, it may be utilized with a wide range of datasets and is compatible with various FR-CNN architectures. Second, we focus on the application of a joint score function over a variety of picture features. This depicts the location of a concealed object concerning the other image data. The updated data and the structured production loss are the only two inputs that influence the parameters of the scoring function. The joint-score iterative context refinement (CIR) is used to generate our final model, which is then classified using a Smooth Support Vector Machine (SSVM). We measured the accuracy and mean average precision after training FR-CNN + CIR + SSVM on a low-cost GPU with the PASCAL VOC 2012 dataset. Our results are 3.6% more exact than rival learning algorithms on average.
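The core idea described above is to iteratively refine region proposals toward the ground truth and rescore them. The exact joint score function and CIR procedure are not specified in the abstract, so the loop below is only a schematic sketch of that iterate-refine-rescore pattern, with hypothetical helper names:

```python
# Schematic sketch of an iterate/refine/rescore loop in the spirit of iterative
# context refinement (CIR). `detector`, `refine_box`, and `joint_score` are
# hypothetical placeholders, not the paper's actual components.
import numpy as np

def iterative_refinement(image, detector, refine_box, joint_score,
                         n_iters=3, score_thresh=0.5):
    boxes, scores = detector(image)          # initial FR-CNN-style proposals
    for _ in range(n_iters):
        refined = [refine_box(image, box) for box in boxes]   # nudge each box
        boxes = np.array(refined)
        scores = np.array([joint_score(image, b) for b in boxes])  # rescore jointly
    keep = scores >= score_thresh            # final filtered detections
    return boxes[keep], scores[keep]
```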
Fish species must be identified for stock assessments, ecosystem monitoring, production management, and the conservation of endangered species. Implementing algorithms for fish detection in underwater settings like the Gulf of Mexico poses a formidable challenge. Active learning, a method that efficiently identifies informative samples for annotation while staying within a budget, has demonstrated its effectiveness in the context of object detection in recent times. In this study, we present an active learning model designed for fish species recognition in underwater environments. It can be employed as a system to effectively lower the expense associated with manual annotation. It uses the epistemic uncertainty from Evidential Deep Learning (EDL), proposes a novel module denoted as the Model Evidence Head (MEH), and employs Hierarchical Uncertainty Aggregation (HUA) to obtain the informativeness of an image. We conducted experiments using a fine-grained and extensive dataset of reef fish collected from the Gulf of Mexico, specifically the Southeast Area Monitoring and Assessment Program Dataset 2021 (SEAMAPD21). The experimental results demonstrate that the framework achieves better performance on SEAMAPD21, demonstrating a favorable balance between data efficiency and recognition performance.
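Evidential Deep Learning quantifies epistemic uncertainty from the per-class Dirichlet evidence a network outputs. The sketch below shows the standard EDL formulation from the literature (u = K / S, where S is the total Dirichlet strength), given as background only; it is not the paper's MEH or HUA modules:

```python
# Standard EDL-style uncertainty from per-class evidence: alpha = evidence + 1,
# total strength S = sum(alpha), epistemic uncertainty u = K / S. This is the
# common EDL formulation, not the paper's MEH/HUA modules.
import numpy as np

def edl_uncertainty(evidence):
    evidence = np.asarray(evidence, dtype=np.float64)   # non-negative per-class evidence
    alpha = evidence + 1.0                              # Dirichlet parameters
    strength = alpha.sum(axis=-1, keepdims=True)        # total strength S
    probs = alpha / strength                            # expected class probabilities
    k = evidence.shape[-1]                              # number of classes K
    uncertainty = k / strength.squeeze(-1)              # u = K / S, in (0, 1]
    return probs, uncertainty

# Example: strong evidence for class 0 vs. almost no evidence at all
# probs, u = edl_uncertainty([[40.0, 1.0, 1.0], [0.2, 0.1, 0.3]])
# The second sample gets much higher uncertainty, making it a better annotation candidate.
```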