Fish species must be identified for stock assessments, ecosystem monitoring, production management, and the conservation of endangered species. Implementing algorithms for fish detection in underwater settings like the Gulf of Mexico poses a formidable challenge. Active learning, a method that efficiently identifies informative samples for annotation while staying within a budget, has demonstrated its effectiveness in the context of object detection in recent times. In this study, we present an active learning model designed for fish recognition in underwater environments. It can be employed as a system to effectively lower the expense associated with manual annotation. It uses the epistemic uncertainty of Evidential Deep Learning (EDL), proposes a novel module denoted the Model Evidence Head (MEH), and employs Hierarchical Uncertainty Aggregation (HUA) to obtain the informativeness of an image. We conducted experiments using a fine-grained and extensive reef fish dataset collected from the Gulf of Mexico, specifically the Southeast Area Monitoring and Assessment Program Dataset 2021 (SEAMAPD21). The experimental results demonstrate that the framework achieves better performance on SEAMAPD21, showing a favorable balance between data efficiency and recognition performance.
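As a rough illustration of the uncertainty-based selection described above, the sketch below scores images with Dirichlet-based EDL epistemic uncertainty and picks the most informative ones within a labeling budget. This is a minimal stand-in, not the paper's MEH or HUA modules; the aggregation by simple averaging and all function names are assumptions.

```python
# Minimal sketch (not the paper's MEH/HUA): scoring images for active learning
# with Dirichlet-based Evidential Deep Learning (EDL) epistemic uncertainty.
import torch

def epistemic_uncertainty(evidence: torch.Tensor) -> torch.Tensor:
    """evidence: non-negative per-class evidence, shape (num_boxes, num_classes).
    In EDL, alpha = evidence + 1 and the vacuity u = K / sum(alpha) is a common
    epistemic-uncertainty measure (u approaches 1 when there is no evidence)."""
    alpha = evidence + 1.0
    num_classes = alpha.shape[-1]
    return num_classes / alpha.sum(dim=-1)            # shape (num_boxes,)

def image_informativeness(evidence: torch.Tensor) -> float:
    """Stand-in for hierarchical aggregation: average box-level uncertainty
    into a single image-level score (higher = more informative to annotate)."""
    u = epistemic_uncertainty(evidence)
    return float(u.mean()) if u.numel() > 0 else 1.0  # empty image -> max uncertainty

def select_for_annotation(scores: dict[str, float], budget: int) -> list[str]:
    """Pick the top-`budget` most informative unlabeled images."""
    return sorted(scores, key=scores.get, reverse=True)[:budget]
```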
Sensors, Journal Year: 2022, Volume and Issue: 22(24), P. 9577 - 9577, Published: Dec. 7, 2022
LiDAR is a commonly used sensor for autonomous driving, enabling accurate, robust, and fast decision-making when driving. The sensor is used in the perception system, especially in object detection, to understand the environment. Although 2D object detection has succeeded during the deep-learning era, the lack of depth information limits the understanding of the environment and of object location. Three-dimensional (3D) sensors, such as LiDAR, give 3D information about the surrounding environment, which is essential for the perception system. Despite the attention of the computer vision community, due to its multiple applications in robotics and autonomous driving, there are still challenges such as scale change, sparsity, uneven distribution of the data, and occlusions. Different representations of the data and methods to minimize the effect of sparsity have been proposed. This survey presents LiDAR-based 3D object detection and feature-extraction techniques for LiDAR data. The coordinate systems differ between camera-based and LiDAR-based datasets and methods; therefore, the datasets and coordinate systems are summarized first. Then, state-of-the-art LiDAR-based object-detection methods are reviewed, with a selected comparison among them.
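To make the sparsity discussion concrete, here is a minimal sketch of a voxel-grid representation, one of the common ways surveyed methods regularize sparse, unevenly distributed LiDAR points. It is a generic illustration, not a specific method from the survey; the voxel sizes, range, and per-voxel cap are illustrative values.

```python
# Minimal sketch of voxelizing a sparse LiDAR point cloud into a regular grid.
import numpy as np

def voxelize(points: np.ndarray, voxel_size=(0.2, 0.2, 0.4),
             pc_range=(0, -40, -3, 70, 40, 1), max_points_per_voxel=32):
    """points: (N, 4) array of x, y, z, intensity in the LiDAR frame.
    Returns a dict mapping integer voxel indices (ix, iy, iz) to point lists."""
    x_min, y_min, z_min, x_max, y_max, z_max = pc_range
    mask = ((points[:, 0] >= x_min) & (points[:, 0] < x_max) &
            (points[:, 1] >= y_min) & (points[:, 1] < y_max) &
            (points[:, 2] >= z_min) & (points[:, 2] < z_max))
    pts = points[mask]
    idx = np.floor((pts[:, :3] - np.array([x_min, y_min, z_min])) /
                   np.array(voxel_size)).astype(np.int32)
    voxels = {}
    for p, (ix, iy, iz) in zip(pts, idx):
        bucket = voxels.setdefault((int(ix), int(iy), int(iz)), [])
        if len(bucket) < max_points_per_voxel:   # cap points to bound memory
            bucket.append(p)
    return voxels
```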
IEEE Sensors Journal, Journal Year: 2023, Volume and Issue: 23(4), P. 3378 - 3394, Published: Jan. 13, 2023
An accurate and robust perception system is key to understanding the driving environment of autonomous robots. Autonomous driving needs 3-D information about objects, including an object's location and pose, to understand the environment clearly. A camera sensor is widely used in autonomous driving because of its richness of color and texture and its low price. The major problem with the camera is the lack of depth information, which is necessary to understand the 3-D environment. In addition, scale change and occlusion make object detection more challenging. Many deep learning-based methods, such as depth estimation, have been developed to compensate for the missing depth information. This survey presents image-based 3-D object detection, 3-D bounding box encoding techniques, and evaluation metrics. The image-based methods are categorized based on the technique used to estimate an image's depth information, and insights are added to each method. Then, state-of-the-art (SOTA) monocular and stereo camera-based methods are summarized. We also compare the performance of selected models and present the challenges and future directions of image-based 3-D object detection.
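To illustrate why estimated depth is central to these methods, the sketch below back-projects a detected 2-D box center into a 3-D location with the pinhole camera model. It is a generic geometric illustration, not taken from any surveyed method; the intrinsic matrix values are illustrative only.

```python
# Minimal sketch: back-projecting a 2-D box center (u, v) with an estimated
# depth z into a 3-D point using the pinhole model
# X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z.
import numpy as np

def backproject_center(u: float, v: float, depth: float, K: np.ndarray) -> np.ndarray:
    """K is the 3x3 camera intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])    # 3-D object center in the camera frame

# Example with illustrative KITTI-like intrinsics:
K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])
center_3d = backproject_center(650.0, 180.0, depth=15.0, K=K)
```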
Sensors, Journal Year: 2022, Volume and Issue: 22(21), P. 8463 - 8463, Published: Nov. 3, 2022
When it comes to the essential abilities of autonomous ground vehicles (AGVs), detection is one of them. In order to safely navigate through any known or unknown environment, an AGV must be able to detect the important elements on its path. Detection is applicable both on-road and off-road, but the two environments are very different. The key elements of any environment that an AGV must identify are the drivable pathway and whether there are obstacles around it. Many works have been published focusing on these components in various ways. In this paper, a survey of the most recent advancements in detection methods intended specifically for the off-road environment is presented. For this, we divided the literature into three major groups: the drivable pathway, positive obstacles, and negative obstacles. Each portion is further divided into multiple categories based on the technology used, for example, single sensor-based approaches, and on how the data are analyzed. Furthermore, we have added critical findings on detection technology, the challenges associated with it, and possible future directions. The authors believe this work will help readers find relevant literature when doing similar work.
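As a toy illustration of the positive/negative obstacle taxonomy used in this grouping, the sketch below labels cells of an elevation grid relative to a local ground estimate: cells well above it are positive obstacles (rocks, trees), cells well below it are negative obstacles (ditches, holes). The thresholds and the flat-ground assumption are illustrative only; real systems fit a ground model per region.

```python
# Toy labeling of an elevation grid into drivable / positive / negative obstacle cells.
import numpy as np

def label_cells(elevation: np.ndarray, ground: float,
                pos_thresh: float = 0.3, neg_thresh: float = -0.3) -> np.ndarray:
    """elevation: 2-D grid of cell heights in meters. Returns per-cell labels:
    0 = drivable, 1 = positive obstacle, 2 = negative obstacle."""
    labels = np.zeros_like(elevation, dtype=np.int8)
    labels[elevation - ground > pos_thresh] = 1   # protrudes above the ground plane
    labels[elevation - ground < neg_thresh] = 2   # drops below the ground plane
    return labels
```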
Journal of Marine Science and Engineering, Journal Year: 2024, Volume and Issue: 12(3), P. 415 - 415, Published: Feb. 26, 2024
The digitization of catch information for the promotion of sustainable fisheries is gaining momentum globally. However, the manual measurement of fundamental information, such as species identification, length measurement, and fish count, is highly inconvenient, thus intensifying the call for its automation. Recently, image recognition systems based on convolutional neural networks (CNNs) have been extensively studied across diverse fields. Nevertheless, deploying CNNs to identify fish species is difficult owing to the intricate nature of managing a plethora of species, which fluctuate with season and locale, in addition to the scarcity of public datasets encompassing large catches. To overcome this issue, we designed a transferable pre-trained CNN model, specifically one that can be easily reused in various fishing grounds. Utilizing an extensive photographic database from a Japanese museum, we developed a transferable fish identification (TFI) model employing strategies such as multiple pre-training, learning rate scheduling, multi-task learning, and metric learning. We further introduced two application methods, namely transfer learning and output layer masking, for the TFI model, validating their efficacy through rigorous experiments.
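The sketch below shows one plausible reading of output layer masking: at deployment, the pre-trained classifier's predictions are restricted to the subset of species known to occur at a given fishing ground. The exact implementation in the paper may differ; the function name and class indices here are assumptions.

```python
# Sketch of output-layer masking: suppress species that do not occur locally
# before the softmax, so only locally plausible species can be predicted.
import torch

def masked_prediction(logits: torch.Tensor, allowed_classes: list[int]) -> torch.Tensor:
    """logits: (batch, num_species) raw outputs of a pre-trained TFI-style model.
    Classes outside `allowed_classes` are set to -inf before the softmax."""
    mask = torch.full_like(logits, float("-inf"))
    mask[:, allowed_classes] = 0.0
    return torch.softmax(logits + mask, dim=-1)

# e.g. a fishing ground where only species 3, 17, and 42 are landed:
# probs = masked_prediction(model(images), allowed_classes=[3, 17, 42])
```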
Journal of Marine Science and Engineering, Journal Year: 2024, Volume and Issue: 12(6), P. 864 - 864, Published: May 22, 2024
Due to the complexity of underwater environments and the lack of training samples, the application of target detection algorithms in underwater environments has yet to provide satisfactory results. It is crucial to design specialized recognition algorithms for different tasks. In order to achieve this goal, we created a dataset of freshwater fish captured from multiple angles and under varied lighting conditions, aiming to improve fish detection in natural environments. We propose a method suitable for fish detection, called DyFish-DETR (Dynamic Fish Detection with Transformers). In DyFish-DETR, DyFishNet (Dynamic Fish Net) is proposed to better extract fish body texture features, and a Slim Hybrid Encoder is designed to fuse fish feature information. The results of ablation experiments show that both modules can effectively improve the mean Average Precision (mAP) of the model in fish detection as well as its Frames Per Second (FPS), while reducing the number of parameters and Floating Point Operations (FLOPs). On our proposed dataset, DyFish-DETR achieved a mAP of 96.6%. In benchmarking experiments, its Average Precision (AP) and Average Recall (AR) are higher than those of several state-of-the-art methods. Additionally, it achieved 99%, 98.8%, and 83.2%, respectively, on other datasets.
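For context on the efficiency figures compared above (parameters, FLOPs, FPS), here is a generic sketch of how parameter count and FPS are typically measured for a PyTorch detector. This is not the authors' evaluation code; the input resolution and iteration counts are illustrative.

```python
# Generic measurement of parameter count and FPS for a PyTorch detector.
import time
import torch

def count_parameters(model: torch.nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

@torch.no_grad()
def measure_fps(model: torch.nn.Module, input_shape=(1, 3, 640, 640),
                warmup: int = 10, iters: int = 100) -> float:
    model.eval()
    x = torch.randn(*input_shape)
    for _ in range(warmup):          # warm-up runs to stabilize timing
        model(x)
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    return iters / (time.perf_counter() - start)
```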
Animals, Journal Year: 2024, Volume and Issue: 14(3), P. 499 - 499, Published: Feb. 2, 2024
Blurry scenarios, such as light reflections and water ripples, often affect the clarity and signal-to-noise ratio of fish images, posing significant challenges for traditional deep learning models in accurately recognizing fish species. Firstly, such models rely on a large amount of labeled data; however, it is difficult to label data in blurry scenarios. Secondly, existing models need to be more effective at processing bad, blurry, or otherwise inadequate images, which is an essential reason for their low recognition rate. A fish image recognition method based on a diffusion model and an attention mechanism, DiffusionFR, is proposed to solve these problems and improve the performance of fish species recognition on blurry images. This paper presents the selection and application of this blur-correction technique. In this method, a two-stage diffusion network model, TSD, is designed to deblur blurry scene pictures and restore their clarity, and a learnable attention module, LAM, is intended to improve the accuracy of fish recognition. In addition, a new dataset, BlurryFish, was constructed and used to validate the effectiveness of the method by combining blurry images derived from the publicly available Fish4Knowledge dataset. The experimental results demonstrate that DiffusionFR achieves outstanding performance on various datasets. On the original dataset, it achieved the highest training accuracy of 97.55%, as well as a Top-1 test score of 92.02% and a Top-5 test score of 95.17%. Furthermore, on nine datasets with light reflection noise, the mean training accuracy reached a peak of 96.50%, while the mean Top-1 and Top-5 test scores were 90.96% and 94.12%, respectively. Similarly, on three datasets with water ripple noise, the corresponding values were 95.00%, 89.54%, and 92.73%. These results showcase the superior performance and enhanced robustness of DiffusionFR in handling noise.
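The Top-1 and Top-5 scores quoted above are standard classification metrics; the sketch below shows how they are usually computed for a batch of predictions. This is a generic illustration, not the authors' evaluation code.

```python
# Standard Top-k accuracy computation for a batch of classification predictions.
import torch

def topk_accuracy(logits: torch.Tensor, targets: torch.Tensor, k: int = 5) -> float:
    """logits: (batch, num_classes); targets: (batch,) integer class labels."""
    topk = logits.topk(k, dim=-1).indices                  # (batch, k) predicted classes
    correct = (topk == targets.unsqueeze(-1)).any(dim=-1)  # label appears in the top k
    return correct.float().mean().item()

# top1 = topk_accuracy(logits, targets, k=1)
# top5 = topk_accuracy(logits, targets, k=5)
```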
Fishes, Journal Year: 2024, Volume and Issue: 9(5), P. 151 - 151, Published: April 23, 2024
Traditional fish farming methods suffer from backward production, low efficiency, low yield, and environmental pollution. As a result of thorough research using deep learning technology, the industrial aquaculture model has experienced gradual maturation. However, a variety of complex factors makes it difficult to extract effective features, which results in less-than-good detection performance. This paper proposes a fish detection method that combines a triple attention mechanism with a You Only Look Once (TAM-YOLO) model. In order to enhance the speed of training, the data encapsulation process incorporates positive sample matching. An exponential moving average (EMA) model is incorporated into the training process to make it more robust, and coordinate attention (CA) and a convolutional block attention module are integrated into the YOLOv5s backbone to enhance feature extraction across channels and spatial locations. The extracted feature maps are input into the PANet path aggregation network, and the underlying information is stacked with the feature maps. This improves the detection accuracy for blurred and distorted underwater images. Experimental results show that the proposed TAM-YOLO model outperforms YOLOv3, YOLOv4, YOLOv5s, YOLOv5m, and SSD, with a mAP value of 95.88%, thus providing a new strategy for fish detection.
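For the exponential moving average of model weights mentioned above, here is a minimal sketch of the shadow-weight EMA commonly kept by YOLO-style trainers and used for evaluation. The decay value is illustrative and not taken from the paper, and the class is a generic implementation rather than the authors' exact code.

```python
# Minimal EMA of model weights: a shadow copy updated after each optimizer step.
import copy
import torch

class ModelEMA:
    def __init__(self, model: torch.nn.Module, decay: float = 0.9999):
        self.ema = copy.deepcopy(model).eval()   # shadow model used for evaluation
        self.decay = decay
        for p in self.ema.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model: torch.nn.Module):
        msd = model.state_dict()
        for name, ema_param in self.ema.state_dict().items():
            if ema_param.dtype.is_floating_point:
                # ema = decay * ema + (1 - decay) * current
                ema_param.mul_(self.decay).add_(msd[name], alpha=1.0 - self.decay)

# Usage: after each optimizer.step(), call ema.update(model) and evaluate ema.ema.
```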