Forests, Journal Year: 2024, Volume and Issue: 15(11), P. 1908 - 1908, Published: Oct. 29, 2024
The effective management and conservation of forest resources hinge on accurate monitoring. Nonetheless, individual remote-sensing images captured by low-altitude unmanned aerial vehicles (UAVs) fail to encapsulate the entirety of a forest’s characteristics. The application of image-stitching technology to high-resolution drone imagery facilitates prompt evaluation of forest resources, encompassing quantity, quality, and spatial distribution. This study introduces an improved SIFT algorithm designed to tackle the challenges of low matching rates and prolonged registration times encountered with forest images characterized by dense textures. By implementing the SIFT-OCT (SIFT omitting the initial scale space) approach, the method bypasses the initial scale space, thereby reducing the number of ineffective feature points and augmenting processing efficiency. To bolster the algorithm’s resilience against rotation and illumination variations, and to furnish supplementary matching information even when fewer valid feature points are available, the gradient location and orientation histogram (GLOH) descriptor is integrated. For feature matching, the more computationally efficient Manhattan distance is utilized to filter candidate points, which further optimizes efficiency; the fast sample consensus (FSC) algorithm is then applied to remove mismatched point pairs, thus refining registration accuracy.
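The matching step lends itself to a short illustration. The sketch below is an assumption-laden illustration rather than the authors’ implementation: it matches descriptor arrays with the L1 (Manhattan) distance and a Lowe-style ratio test, and the names desc_a, desc_b, and the ratio value are hypothetical.

```python
import numpy as np

def manhattan_match(desc_a, desc_b, ratio=0.7):
    """Match feature descriptors using the L1 (Manhattan) distance.

    desc_a: (N, D) array, desc_b: (M, D) array, e.g. GLOH descriptors.
    The L1 distance avoids the squares and square roots of the Euclidean
    distance, which is the efficiency argument made in the abstract.
    Returns a list of (index_a, index_b) candidate matches.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.abs(desc_b - d).sum(axis=1)  # one L1 distance per row of desc_b
        nearest, second = np.argsort(dists)[:2]  # assumes desc_b has >= 2 rows
        # Lowe-style ratio test: keep only clearly unambiguous matches.
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```

The surviving candidate pairs would then be handed to a consensus stage such as FSC (or a RANSAC-style filter) to discard geometrically inconsistent matches.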
This research also investigates the influence of vegetation coverage and image overlap rates on stitching efficacy, using five sets of Cyclobalanopsis natural forest images. Experimental outcomes reveal that the proposed method significantly reduces registration time, by average factors of 3.66 compared to SIFT, 1.71 compared to SIFT-OCT, 5.67 compared to PSO-SIFT, and 3.42 compared to KAZE, demonstrating its superior performance.
Information Fusion, Journal Year: 2024, Volume and Issue: 108, P. 102369 - 102369, Published: March 22, 2024
Wildfires have emerged as one of the most destructive natural disasters worldwide, causing catastrophic losses. These losses have underscored the urgent need to improve public knowledge of, and advance existing techniques in, wildfire management. Recently, the use of Artificial Intelligence (AI) in wildfires, propelled by the integration of Unmanned Aerial Vehicles (UAVs) and deep learning models, has created an unprecedented momentum to implement and develop more effective wildfire management. Although several survey papers have explored learning-based approaches in wildfire science, drone-based disaster management, and risk assessment, a comprehensive review emphasizing the application of AI-enabled UAV systems and investigating the role of such methods throughout the overall workflow of multi-stage wildfire management, including pre-fire (e.g., vision-based vegetation fuel measurement), active-fire (e.g., fire growth modeling), and post-fire tasks (e.g., evacuation planning), is notably lacking. This review synthesizes and integrates state-of-the-science reviews and research at the nexus of observations and modeling, AI, and UAVs, topics at the forefront of current advances, elucidating the role of AI in performing monitoring and actuation tasks from the pre-fire stage, through the active-fire stage, to post-fire management. To this aim, we provide an extensive analysis of remote sensing systems, with a particular focus on UAV advancements, device specifications, and sensor technologies relevant to wildfire management. We also examine fire management approaches, including monitoring and prevention strategies, as well as evacuation planning, damage assessment, and operation strategies. Additionally, we summarize a wide range of computer vision techniques, with an emphasis on Machine Learning (ML), Reinforcement Learning (RL), and Deep Learning (DL) algorithms for classification, segmentation, detection, and monitoring tasks. Ultimately, we underscore the substantial advancement of wildfire modeling through cutting-edge UAV-based data, providing novel insights and enhanced predictive capabilities to understand dynamic wildfire behavior.
Fire, Journal Year: 2025, Volume and Issue: 8(2), P. 59 - 59, Published: Jan. 30, 2025
Forest fires pose a severe threat to ecological environments and the safety of human lives and property, making real-time forest fire monitoring crucial. This study addresses challenges in forest fire object detection, including small targets, sparse smoke, and difficulties in feature extraction, by proposing TFNet, a Transformer-based multi-scale feature fusion detection network. TFNet integrates several components: the SRModule, the CG-MSFF Encoder, the Decoder and Head, and the WIOU Loss. The SRModule employs a multi-branch structure to learn diverse feature representations of forest fire images, utilizing 1 × 1 convolutions to generate redundant feature maps that enhance feature diversity. The CG-MSFF Encoder introduces a context-guided attention mechanism combined with adaptive feature fusion (AFF), enabling effective reweighting of features across layers and extracting both local and global representations. The Decoder and Head refine the output by iteratively optimizing target queries using self- and cross-attention, improving detection accuracy. Additionally, the WIOU Loss assigns varying weights to the IoU metric for predicted versus ground truth boxes, thereby balancing positive and negative samples and improving localization accuracy.
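As a rough, hedged illustration of the idea behind weighting an IoU-based loss per box pair (the paper’s exact WIOU formulation may differ), the sketch below assumes axis-aligned boxes and a hypothetical per-pair weight tensor:

```python
import torch

def box_iou(pred, target, eps=1e-7):
    """IoU for axis-aligned boxes in (x1, y1, x2, y2) format, shape (N, 4)."""
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    return inter / (area_p + area_t - inter + eps)

def weighted_iou_loss(pred, target, weights):
    """Weight each box pair's (1 - IoU) term, e.g. to rebalance
    easy versus hard (or positive versus negative) samples."""
    return (weights * (1.0 - box_iou(pred, target))).mean()
```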
Experimental results on two publicly available datasets, D-Fire and M4SFWD, demonstrate that TFNet outperforms comparative models in terms of precision, recall, F1-Score, mAP50, and mAP50–95. Specifically, on the D-Fire dataset, TFNet achieved a precision of 81.6%, a recall of 74.8%, an F1-Score of 78.1%, an mAP50 of 81.2%, and an mAP50–95 of 46.8%. On the M4SFWD dataset, these metrics improved to 86.6%, 83.3%, 84.9%, 89.2%, and 52.2%, respectively. The proposed TFNet offers technical support for developing efficient and practical forest fire monitoring systems.
International journal of engineering. Transactions B: Applications, Journal Year: 2024, Volume and Issue: 37(5), P. 1022 - 1035, Published: Jan. 1, 2024
Drone semantic segmentation is a challenging task in computer vision, mainly due to inherent complexities associated with aerial imagery. This paper presents a comprehensive methodology for drone semantic segmentation and evaluates its performance using the ICG dataset. The proposed method leverages hierarchical multi-scale feature extraction, efficient channel-based attention, and Atrous Spatial Pyramid Pooling (ASPP) to address the unique challenges encountered in this domain.
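For readers unfamiliar with ASPP, the following is a generic PyTorch sketch of the module; the dilation rates and layer layout are common defaults assumed here, not the paper’s configuration:

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convolutions
    gather context at several receptive-field sizes, and a 1x1
    convolution fuses the concatenated branches."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch,
                          kernel_size=3 if r > 1 else 1,
                          padding=r if r > 1 else 0,
                          dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(len(rates) * out_ch, out_ch, kernel_size=1)

    def forward(self, x):
        # Every branch preserves spatial size, so the outputs concatenate cleanly.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```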
In this study, the performance of the proposed method is compared with several state-of-the-art models. The findings of this research highlight the effectiveness of the proposed method in tackling drone semantic segmentation, and the outcomes demonstrate its superiority over existing models, showcasing its potential to deliver accurate segmentation results. These results contribute to the advancement of drone-based applications, such as surveillance, object tracking, and environmental monitoring, where precise segmentation is crucial. The obtained experimental results show that the proposed method outperforms the existing approaches regarding the Dice, mIOU, and accuracy metrics, achieving impressive scores of 86.51%, 76.23%, and 91.74%, respectively.
Remote Sensing Letters, Journal Year: 2025, Volume and Issue: 16(3), P. 277 - 289, Published: Jan. 17, 2025
In this paper, we propose novel techniques for fire segmentation from unmanned aerial vehicle (UAV) images. (1) We propose the ObjectDetection+CIELAB thresholding technique, which leverages a pre-trained object detector such as YOLO (``you only look once'') to generate bounding boxes. We then apply thresholding in the CIELAB colour space within these regions to detect fire pixels. This approach significantly improves speed by streamlining the segmentation task into a more focused detection and classification task.
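A minimal sketch of this two-stage idea follows, assuming OpenCV; the threshold values and the box format are illustrative assumptions, not the thresholds used in the paper:

```python
import cv2
import numpy as np

def fire_mask_in_boxes(image_bgr, boxes, a_min=150, b_min=140):
    """Threshold the a* and b* channels of CIELAB inside detector boxes.

    boxes: iterable of integer (x1, y1, x2, y2) pixel boxes, e.g. from YOLO.
    a_min, b_min: illustrative thresholds; flames lean red/yellow, i.e.
    high a* and b* in OpenCV's 0-255 Lab encoding.
    """
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    for x1, y1, x2, y2 in boxes:
        roi = lab[y1:y2, x1:x2]
        fire = (roi[..., 1] >= a_min) & (roi[..., 2] >= b_min)
        mask[y1:y2, x1:x2][fire] = 255  # write back into the full-image mask
    return mask
```

Restricting the colour test to detector boxes is what makes the approach fast: the per-pixel work only runs inside a few candidate regions.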
(2) We further introduce the SEG-4CHANNEL technique, which generates a pixel mask using the first method. This mask is integrated as a fourth channel into various segmentation networks, allowing the models to concentrate on fire regions while minimising background interference. (3) Finally, we explore AttentionSeg, which incorporates an attention module into a segmentation framework (e.g., SegFormer-B5) that utilises all four channels. It combines the advantages of a colour-space model, a convolutional neural network (CNN), and a transformer. We design a large number of segmentation networks with different backbones and evaluate them on the FLAME (Fire Luminosity Airborne-based Machine learning Evaluation) dataset. Our best model, AttentionSeg-B5, segments fire with an intersection-over-union (IoU) score of 84.15% and a 91.39% F1-score. The code has been released at https://github.com/CandleLabAI/FireSegmentation.
Forests, Journal Year: 2023, Volume and Issue: 14(9), P. 1887 - 1887, Published: Sept. 17, 2023
Forest fires pose severe risks, including habitat loss and air pollution. Accurate forest flame segmentation is vital for effective fire management and the protection of ecosystems: it improves detection, response, and understanding of fire behavior. Due to the easy accessibility and rich information content of remote sensing images, remote sensing techniques are frequently applied in forest flame segmentation. With the advancement of deep learning, convolutional neural network (CNN) methods have been widely adopted and have achieved remarkable results. However, remote sensing images often have high resolutions, and, relative to the entire image, flame regions are relatively small, resulting in class imbalance issues. Additionally, mainstream semantic segmentation methods are limited by the receptive field of CNNs, making it challenging to effectively extract global features from images and leading to poor performance when relying solely on labeled datasets. To address these issues, we propose a method based on the deeplabV3+ model, incorporating the following design strategies: (1) an adaptive Copy-Paste data augmentation method is introduced to learn effectively from difficult samples (images that cannot be adequately learned due to various factors), (2) transformer modules are concatenated in parallel and integrated into the encoder, while the CBAM attention mechanism is added to the decoder to fully extract image features, and (3) a dice loss is introduced to mitigate the class imbalance problem (a sketch of such a loss follows below).
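A minimal sketch of a soft dice loss of the kind named in strategy (3), assuming binary flame-versus-background segmentation; this illustrates the standard formulation, not the authors’ exact code:

```python
import torch

def dice_loss(logits, target, eps=1.0):
    """Soft Dice loss for binary segmentation.

    logits: (N, H, W) raw outputs; target: (N, H, W) with values in {0, 1}.
    Dice measures overlap relative to region size, so small flame regions
    are not swamped by the dominant background class.
    """
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2))
    denom = prob.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()
```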
By conducting validation on our self-constructed dataset, our approach demonstrated superior performance across multiple metrics compared with current state-of-the-art methods. Specifically, in terms of IoU (Intersection over Union), Precision, and Recall for the flame category, it exhibited notable enhancements of 4.09%, 3.48%, and 1.49%, respectively, over the best-performing UNet model. Moreover, it achieved advancements of 11.03%, 9.10%, and 4.77% in the same aforementioned metrics with deeplabV3+ as the baseline.
Sampling and monitoring of epiphytes growing on trees inside the canopy using Unmanned Aerial Vehicles (UAVs) provides a better approach for botanists. However, the images captured by UAVs usually contain complex backgrounds, uneven lighting, and small targets. Apart from this, obtaining a large number of diverse, high-quality target images is difficult due to accessibility issues within the canopy. This poses a significant challenge to existing advanced segmentation networks. In recent years, Deep Learning (DL) has witnessed widespread adoption in image segmentation methodologies, including popular approaches like U-shaped architectures, vision transformer-based models, and hybrid models. Nevertheless, their reliance on substantial quantities of data for effective training is a limitation when they are applied to smaller datasets exhibiting heterogeneous quality. Furthermore, these networks often incorporate deep encoders, an increased number of convolution filters, and a heightened emphasis on local features rather than global features, aspects crucial for achieving accurate segmentation. Appropriate attention while segmenting targets of varying quality ensures reliable predictions in boundary regions and correct mapping of pixels to their respective classes.
Therefore, we propose a multi-branch dual attention network to segment epiphytes from drone images. The proposed network consists of dedicated parallel branches for extracting local and global features during the encoding stage. The encoder features are passed through spatial and channel attention modules so that relevant features and regions important to the target receive focus. Features from the two branches are fused by summation, and the decoder employs a crossed fusion technique to effectively combine and complement the features from the multiple branches (see the sketch below).
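A minimal sketch of such dual-branch attention-and-summation fusion follows, assuming CBAM-style channel and spatial attention; the module names and layout are illustrative assumptions, not the authors’ code:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel reweighting (illustrative)."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mlp(x)

class SpatialAttention(nn.Module):
    """Reweight spatial positions from pooled channel statistics."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class DualBranchFusion(nn.Module):
    """Attend to each encoder branch, then fuse by element-wise summation."""
    def __init__(self, ch):
        super().__init__()
        self.local_att = nn.Sequential(ChannelAttention(ch), SpatialAttention())
        self.global_att = nn.Sequential(ChannelAttention(ch), SpatialAttention())

    def forward(self, local_feat, global_feat):
        return self.local_att(local_feat) + self.global_att(global_feat)
```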
The network was validated on a dataset of 132 images for training. Its output was compared with the state-of-the-art transformer models used in a previous study [40], and it proved more accurate in predicting class labels. Specifically, the test cases include images taken close to the target, images captured under low light, and images that are zoomed and cropped. We calculated the Intersection over Union (IoU) score to conduct a quantitative analysis of the trained model's performance across these various qualities of test cases.
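The IoU score used here is the standard overlap measure between a predicted and a ground-truth mask; a minimal sketch over binary arrays:

```python
import numpy as np

def mask_iou(pred, target):
    """IoU between two binary segmentation masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return np.logical_and(pred, target).sum() / union
```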
A qualitative assessment of the predicted masks is presented through falsecolor images, highlighting accurately segmented regions, areas where the model failed, and instances of false predictions. The proposed model exhibited a 5% improvement in average IoU, a 48% increase for images under low-light or shadow conditions, and a remarkable 68% increase for images that were zoomed and cropped, compared with the baseline transformer model.