PLoS ONE, Journal Year: 2024, Volume and Issue: 19(11), P. e0313323, Published: Nov. 25, 2024
Sea turtles exhibit high migratory rates and occupy a broad range of habitats, which in turn makes monitoring these taxa challenging. Applying deep learning (DL) models to the vast image datasets collected by citizen science programs offers a promising solution to the challenge of monitoring wide-ranging wildlife, particularly sea turtles. Among DL models, object detection models such as the You Only Look Once (YOLO) series have been extensively employed for wildlife classification. Despite their successful application in this domain, detecting objects in images with complex backgrounds, including underwater environments, remains a significant challenge. Recently, instance segmentation models have been developed to address this issue by providing more accurate classification than traditional object detection models. This study compared the performance of two state-of-the-art methods, namely an object detection model (YOLOv5) and an instance segmentation model (YOLOv5-seg), in detecting and classifying sea turtles. The images were collected from iNaturalist and Google and then divided into 64% training, 16% validation, and 20% test sets. Model performance during and after training was evaluated with loss functions and various accuracy indexes, respectively. Based on the loss functions, YOLOv5-seg demonstrated a lower error rate than YOLOv5 in classifying sea turtles. According to the mean Average Precision (mAP) values, which reflect precision and recall, YOLOv5-seg showed superior performance: the mAP0.5 and mAP0.5:0.95 of YOLOv5 were 0.885 and 0.795, respectively, whereas for YOLOv5-seg the values were 0.918 and 0.831, respectively.
In particular, based on these results, YOLOv5-seg provided improved classification performance, and the results may help improve sea turtle monitoring in the future.
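As an illustration of the 64%/16%/20% split described above, the following is a minimal sketch assuming a flat directory of JPEG images and YOLO-style split lists; the directory and file names are hypothetical, not the study's actual data layout.

```python
# Minimal sketch of a 64% / 16% / 20% train/validation/test split
# for a YOLO-style dataset; the "images" directory is hypothetical.
import random
from pathlib import Path

random.seed(0)  # fixed seed for a reproducible split
images = sorted(Path("images").glob("*.jpg"))
random.shuffle(images)

n_train = int(0.64 * len(images))
n_val = int(0.16 * len(images))
splits = {
    "train": images[:n_train],
    "val": images[n_train:n_train + n_val],
    "test": images[n_train + n_val:],
}

# Write one file list per split, as referenced from a YOLOv5 data YAML.
for name, files in splits.items():
    Path(f"{name}.txt").write_text("\n".join(str(p) for p in files))
```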
Journal of Pollination Ecology, Journal Year: 2025, Volume and Issue: 37, P. 1 - 21, Published: Jan. 10, 2025
Monitoring plant-pollinator interactions is crucial for understanding the factors influencing these relationships across space and time. Traditional methods in pollination ecology are resource-intensive, while time-lapse photography offers potential as a non-destructive, automated complementary technique. However, accurate identification of pollinators at finer taxonomic levels (i.e., genus or species) requires sufficiently high image quality. This study assessed the feasibility of using a smartphone setup to capture time-lapse images of arthropods visiting flowers and evaluated whether the images offered sufficient resolution for arthropod identification by taxonomists. Smartphones were positioned above target flowers from various plant species in urban green areas around Leipzig and Halle, Germany.
We present the proportions of identifications (instances) at different taxonomic levels (order, family, genus, species) based on the features visible in the images as interpreted by taxonomists, and we document limitations that stem from the setup (e.g., the fixed camera position preventing certain diagnostic features from being distinguished despite adequate resolution) or from low image quality. Recommendations are provided to address these challenges.
Our results indicate that 89.81% of all Hymenoptera instances were identified to the family level, 84.56% as pollinators, but only 25.35% to the genus level. Taxonomists were less able to identify Dipterans at finer taxonomic levels, with nearly 50% of instances not identifiable, and 26.18% and 15.19% identified at the family and genus levels, respectively. This was due to their small size and to the more challenging features needed for identification (e.g., wing veins). Advancing smartphone technology, along with its accessibility, affordability, and user-friendliness, makes this approach a promising option for coarse-level pollinator monitoring.
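For readers wishing to tally such proportions themselves, a minimal sketch is given below; it assumes a hypothetical annotation table with one row per identified instance and a column recording the finest taxonomic level achieved, which is not the study's actual data format.

```python
# Sketch: share of instances identified to each taxonomic level, per order.
# Assumes a hypothetical annotations.csv with columns "order" and "finest_level",
# where "finest_level" is one of: none, order, family, genus, species.
import pandas as pd

df = pd.read_csv("annotations.csv")

proportions = (
    df.groupby("order")["finest_level"]
      .value_counts(normalize=True)   # fraction of instances per level
      .rename("proportion")
      .reset_index()
)
print(proportions.round(4))
```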
PLoS ONE, Journal Year: 2024, Volume and Issue: 19(4), P. e0295474, Published: April 3, 2024
Insect monitoring is essential to design effective conservation strategies, which are indispensable to mitigate worldwide insect declines and biodiversity loss. For this purpose, traditional monitoring methods are widely established and can provide data with a high taxonomic resolution. However, the processing of captured insect samples is often time-consuming and expensive, which limits the number of potential replicates. Automated monitoring methods can facilitate data collection at a higher spatiotemporal resolution with comparatively lower effort and cost.
Here, we present the Insect Detect DIY (do-it-yourself) camera trap for non-invasive automated monitoring of flower-visiting insects, based on low-cost off-the-shelf hardware components combined with open-source software. Custom trained deep learning models detect and track insects landing on an artificial flower platform in real time on-device, and subsequently classify the cropped detections on a local computer.
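A minimal sketch of that crop-then-classify step is shown below; it assumes detections are available as pixel bounding boxes and that a trained classifier has been exported as TorchScript. File names, the input size, and the function name are illustrative, not the project's actual code.

```python
# Sketch: classify cropped detections on a local computer.
# Assumes a trained PyTorch classifier saved as TorchScript ("classifier.pt")
# and detections given as (xmin, ymin, xmax, ymax) pixel boxes; all hypothetical.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

model = torch.jit.load("classifier.pt").eval()

def classify_crops(image_path, boxes):
    image = Image.open(image_path).convert("RGB")
    crops = torch.stack([preprocess(image.crop(box)) for box in boxes])
    with torch.no_grad():
        probs = model(crops).softmax(dim=1)
    return probs.argmax(dim=1).tolist()  # predicted class index per detection

# Example call with one hypothetical detection box:
# classify_crops("frame_0001.jpg", [(120, 80, 220, 190)])
```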
Field deployment of the solar-powered camera trap confirmed its resistance to high temperatures and humidity, which enables autonomous deployment during a whole season. On-device detection and tracking can estimate insect activity/abundance after metadata post-processing. Our classification model achieved high top-1 accuracy on a test dataset and generalized well to real-world images. The software is highly customizable and can be adapted to different use cases. With custom trained models, as well as accessible programming, many possible applications surpassing our proposed method can be realized.
Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1), Published: Jan. 30, 2025
Abstract
Numerous studies have proven the potential of deep learning models for classifying wildlife. Such models can reduce the workload of experts by automating species classification to monitor wild populations and the global wildlife trade. Although deep learning models typically perform better with more input data, the available wildlife data are ordinarily limited, specifically for rare or endangered species. Recently, citizen science programs have helped accumulate valuable wildlife data, but such data are still not enough to achieve the best model performance compared with benchmark datasets.
Recent studies have applied hierarchical classification to a given dataset to improve model accuracy. This study applied transfer learning with a hierarchical classification approach to Amazon parrot species. Specifically, the hierarchy was built based on diagnostic morphological features. Upon evaluating model performance, the hierarchical model outperformed the non-hierarchical model in detecting and classifying Amazon parrots. Notably, the hierarchical model achieved a mean Average Precision (mAP) of 0.944, surpassing the mAP of 0.908 of the non-hierarchical model. Moreover, the hierarchical model improved classification accuracy between morphologically similar species. The outcomes of this study may facilitate monitoring of the trade in parrots for conservation purposes.
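In practice, such a hierarchy can be expressed as a mapping from species to morphology-based groups; the sketch below rolls species-level probabilities up to coarse groups. The group and species names are placeholders, not the hierarchy used in the study.

```python
# Sketch: roll fine-grained species probabilities up to coarse,
# morphology-based groups. The hierarchy below is a placeholder,
# not the hierarchy built in the study.
import numpy as np

hierarchy = {
    "group_A": ["species_1", "species_2"],
    "group_B": ["species_3", "species_4", "species_5"],
}
species = [s for group in hierarchy.values() for s in group]

def group_probabilities(species_probs):
    """Sum species-level probabilities within each coarse group."""
    return {
        group: float(sum(species_probs[species.index(s)] for s in members))
        for group, members in hierarchy.items()
    }

# Example: a softmax output over the five placeholder species.
print(group_probabilities(np.array([0.1, 0.2, 0.3, 0.25, 0.15])))
```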
bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown, Published: April 15, 2024
Abstract
Arthropods, including insects, represent the most diverse animal group and contribute significantly to animal biomass. Automatic monitoring of insects and other arthropods enables quick and efficient observation and management of ecologically and economically important targets such as pollinators, natural enemies, disease vectors, and agricultural pests. The integration of cameras and computer vision facilitates innovative monitoring approaches for agriculture, ecology, entomology, evolution, and biodiversity. However, studying insects and their interactions with flowers and vegetation in natural environments remains challenging, even with automated camera monitoring. This paper presents a comprehensive methodology to monitor the abundance and diversity of wild arthropods and to quantify floral cover as a key resource. We apply the methods across more than 10 million images recorded over two years using 48 insect camera traps placed in three main habitat types. We quantify arthropods and their flower visits on a specific mix of Sedum plant species with white, yellow and red/pink colored flowers.
The proposed deep-learning pipeline estimates flower cover and detects and classifies arthropod taxa from the time-lapse recordings. The flower cover serves only as an estimate to correlate arthropod activity with the flowering plants. Color and semantic segmentation with DeepLabv3 are combined to estimate the percent cover of flowers of different colors.
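A minimal sketch of the percent-cover calculation is given below, assuming a per-pixel segmentation mask in which each flower color is an integer label; the label encoding is an assumption, not the paper's actual output format.

```python
# Sketch: percent cover per flower color from a semantic segmentation mask.
# Assumes an integer mask where 0 = background and 1-3 are flower colors;
# this label mapping is an assumption, not the paper's actual encoding.
import numpy as np

LABELS = {1: "white", 2: "yellow", 3: "red/pink"}

def percent_cover(mask: np.ndarray) -> dict:
    total = mask.size
    return {name: 100.0 * np.count_nonzero(mask == label) / total
            for label, name in LABELS.items()}

# Example with a tiny dummy mask:
mask = np.array([[0, 1, 1], [2, 2, 3], [0, 0, 3]])
print(percent_cover(mask))  # each color covers ~22.2% of this toy mask
```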
Arthropod detection incorporates motion-informed enhanced object detection with You-Only-Look-Once (YOLO), followed by filtering of stationary objects to minimize double counting of non-moving animals and erroneous background detections. This approach has been demonstrated to decrease the incidence of false positives, since these occur in less than 3% of the captured images.
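One simple way to realize such filtering is to discard detections whose box centers barely move between consecutive frames; the sketch below follows that idea with an illustrative pixel threshold and data layout, not the authors' implementation.

```python
# Sketch: suppress detections whose box center stays (nearly) fixed across
# consecutive frames, to reduce double counting of non-moving objects and
# persistent background artifacts. Threshold and data layout are illustrative.
def center(box):
    xmin, ymin, xmax, ymax = box
    return ((xmin + xmax) / 2.0, (ymin + ymax) / 2.0)

def filter_stationary(prev_boxes, curr_boxes, min_shift_px=5.0):
    """Keep current boxes whose center moved more than min_shift_px
    relative to every box in the previous frame."""
    kept = []
    for box in curr_boxes:
        cx, cy = center(box)
        moved = all(
            ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5 > min_shift_px
            for px, py in map(center, prev_boxes)
        )
        if moved:
            kept.append(box)
    return kept

# Example: the second box has not moved and is filtered out.
prev = [(100, 100, 140, 140), (300, 300, 340, 340)]
curr = [(160, 105, 200, 145), (301, 300, 341, 340)]
print(filter_stationary(prev, curr))  # [(160, 105, 200, 145)]
```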
The final step involves grouping the detected arthropods into 19 taxonomic classes. Seven state-of-the-art models were trained and validated, achieving F1-scores ranging from 0.81 to 0.89 for the classification of arthropods. Among these, the selected model, EfficientNetB4, achieved an 80% average precision on randomly selected samples when applied within the complete pipeline, which includes detection, filtering, and classification of images collected in 2021.
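A minimal sketch of adapting an ImageNet-pretrained EfficientNet-B4 to the 19 taxonomic classes with torchvision is shown below; the setup is illustrative and not the authors' training code.

```python
# Sketch: adapt an ImageNet-pretrained EfficientNet-B4 to 19 arthropod classes.
# Illustrative torchvision setup, not the authors' training configuration.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 19

model = models.efficientnet_b4(weights=models.EfficientNet_B4_Weights.IMAGENET1K_V1)
# Replace the final linear layer so the head outputs 19 class scores.
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, NUM_CLASSES)
```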
As expected, reduced flowering during the beginning and end of the season correlates with a noticeable drop in arthropod activity. The method offers a cost-effective …