Computer-Aided Civil and Infrastructure Engineering, Journal Year: 2024, Volume and Issue: unknown, Published: Nov. 8, 2024
Abstract
In damage-level classification, deep learning models are more likely to focus on regions unrelated to the classification targets because of the complexities inherent in real data, such as the diversity of damages (e.g., crack, efflorescence, and corrosion). This causes performance degradation. To solve this problem, it is necessary to handle data complexity and uncertainty. This study proposes a multimodal learning model that can classify damage using text related to damage images, materials, and components. Furthermore, by adjusting the effect of the attention maps based on the confidence calculated when estimating these maps, the proposed method realizes an accurate classification. Our contribution is the development of a model with an end-to-end mechanism that simultaneously considers both the image and the attention map. Finally, experiments on damage images validate the effectiveness of the proposed method.
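A minimal Python sketch (not the authors' implementation) of the idea of weighting an estimated attention map by its own confidence before classification; the module name, feature dimensions, and the confidence head are illustrative assumptions.

    import torch
    import torch.nn as nn

    class ConfidenceWeightedAttention(nn.Module):
        """Blend attended and plain image features by a learned confidence score."""
        def __init__(self, channels, text_dim, num_classes):
            super().__init__()
            self.att_head = nn.Conv2d(channels, 1, kernel_size=1)   # estimates the attention map
            self.conf_head = nn.Sequential(                         # confidence of that map
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(channels, 1), nn.Sigmoid())
            self.classifier = nn.Linear(channels + text_dim, num_classes)

        def forward(self, img_feat, text_feat):
            att = torch.sigmoid(self.att_head(img_feat))             # (B,1,H,W) attention map
            conf = self.conf_head(img_feat).view(-1, 1, 1, 1)        # (B,1,1,1) confidence
            # Low confidence weakens the map's effect on the image features.
            weighted = conf * (att * img_feat) + (1 - conf) * img_feat
            pooled = weighted.mean(dim=(2, 3))                       # global average pooling
            return self.classifier(torch.cat([pooled, text_feat], dim=1))

The point of the blend is that an unreliable attention map degrades gracefully toward the unattended features instead of misleading the classifier.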
Computer-Aided Civil and Infrastructure Engineering, Journal Year: 2024, Volume and Issue: 39(23), P. 3646 - 3665, Published: April 25, 2024
Abstract
To solve the challenges of low recognition accuracy, slow speed, and weak generalization ability inherent in traditional methods for multi-damage detection of concrete bridges, this paper proposes an efficient lightweight damage detection model, constructed by improving you only look once v4 (YOLOv4) with MobileNetv3 fused with inverted residual blocks, named YOLOMF. First, a novel network (MobileNetv3-FusedIR) is designed as the backbone. This is achieved by integrating the fused mobile bottleneck convolution (Fused-MBConv) into the shallow layers of MobileNetv3. Second, the standard convolution in YOLOv4 is replaced with depthwise separable convolution, resulting in a reduction in the number of parameters and the complexity of the model. Third, the effects of different activation functions on the performance of YOLOMF are thoroughly investigated. Finally, to verify the effectiveness of the method in complex environments, the data enhancement library Imgaug is used to simulate bridge damage images under challenging conditions such as motion blur, fog, rain, snow, noise, and color variations. The results indicate that YOLOMF shows excellent proficiency in detecting multiple damages of concrete bridges across varying field-of-view sizes as well as environmental conditions. The detection speed reaches 85 f/s, facilitating effective real-time detection in complex environments.
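A minimal sketch of the depthwise separable convolution that replaces a standard convolution to cut parameters; the layer sizes, activation choice, and parameter figures below are illustrative, not the exact YOLOMF configuration.

    import torch.nn as nn

    def depthwise_separable(in_ch, out_ch, kernel=3):
        """Depthwise spatial filtering followed by a 1x1 pointwise projection."""
        return nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel, padding=kernel // 2, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.Hardswish(),                           # one possible activation choice
            nn.Conv2d(in_ch, out_ch, 1, bias=False),  # pointwise projection
            nn.BatchNorm2d(out_ch),
            nn.Hardswish(),
        )

    # Weight count comparison for a 256 -> 512 layer with a 3x3 kernel:
    # standard conv: 256 * 512 * 9 = 1,179,648
    # separable:     256 * 9 + 256 * 512 = 133,376   (~8.8x fewer weights)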
Computer-Aided Civil and Infrastructure Engineering, Journal Year: 2024, Volume and Issue: unknown, Published: Dec. 25, 2024
Abstract
The steel girders of high-speed railway bridges require regular inspections to ensure bridge stability and provide a safe environment for operations. Unmanned aerial vehicle (UAV)-based inspection has great potential to become an efficient solution by offering superior perspectives and mitigating safety concerns. Unfortunately, classic convolutional neural network (CNN) models suffer from limited detection accuracy or redundant model parameters, and existing CNN-based systems are only designed for a single visual task (e.g., bolt detection or rust parsing only). This paper develops a novel bi-task network (i.e., BGInet) to recognize different types of surface defects on steel girders from UAV imagery. First, the system assembles an advanced detection branch that integrates a sparse attention module, an extended linear aggregation network, and RepConv to solve small object detection with scarce samples and complete defect identification. Then, an innovative U-shape saliency network is integrated into this system to supplement the parsing of defect regions. Furthermore, a pixel-to-real-world mapping utilizing critical flight parameters is also developed and assembled to measure defect areas. Finally, extensive experiments conducted on a UAV-based dataset show that our method achieves better performance over current methods yet remains at a reasonably high inference speed. The performance illustrates that the proposed system can effectively turn UAV imagery into useful information.
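A minimal sketch of how a pixel-to-real-world mapping from flight parameters could convert segmented defect pixels into physical areas; the ground-sampling-distance formula, a fronto-parallel view, and all numeric values are generic photogrammetry assumptions, not the paper's exact model.

    def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                                 flight_distance_m, image_width_px):
        """Real-world size of one pixel (metres/pixel) on the imaged surface."""
        return (sensor_width_mm * flight_distance_m) / (focal_length_mm * image_width_px)

    def defect_area_m2(pixel_count, gsd_m_per_px):
        """Convert a segmented defect's pixel count to physical area."""
        return pixel_count * gsd_m_per_px ** 2

    # Example: 13.2 mm sensor, 8.8 mm lens, 5 m stand-off, 5472 px wide image
    gsd = ground_sampling_distance(13.2, 8.8, 5.0, 5472)   # ~0.00137 m/px
    area = defect_area_m2(12500, gsd)                       # ~0.0235 m^2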
Sensors, Journal Year: 2023, Volume and Issue: 24(1), P. 3 - 3, Published: Dec. 19, 2023
The morphological characteristics of a crack serve as crucial indicators for rating the condition of concrete bridge components. Previous studies have predominantly employed deep learning techniques for pixel-level crack detection, while occasionally incorporating monocular devices to quantify crack dimensions. However, the practical implementation of such methods with the assistance of robots or unmanned aerial vehicles (UAVs) is severely hindered due to their restrictions in frontal image acquisition at known distances. To explore a non-contact inspection approach with enhanced flexibility, efficiency, and accuracy, a binocular stereo vision-based method combined with a full convolutional network (FCN) is proposed for detecting and measuring cracks. Firstly, our FCN leverages the benefits of an encoder-decoder architecture to enable precise crack segmentation while simultaneously emphasizing edge details, at a rate of approximately four pictures per second on a database dominated by complex backgrounds; the training results demonstrate a precision of 83.85%, a recall of 85.74%, and an F1 score of 84.14%. Secondly, the utilization of binocular stereo vision improves the shooting flexibility and streamlines the measurement process. Furthermore, the introduction of a central projection scheme achieves reliable three-dimensional (3D) reconstruction of the crack morphology, effectively avoiding mismatches between the two views and providing a more comprehensive dimensional depiction. An experimental test is also conducted on cracked specimens, where the relative measurement error of crack width ranges from -3.9% to 36.0%, indicating the feasibility of the proposed method.
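A minimal sketch of how a rectified stereo pair could yield a crack-width estimate and its relative measurement error; the focal length, baseline, disparity, and reference width are illustrative values, not taken from the paper.

    def stereo_depth(focal_px, baseline_m, disparity_px):
        """Depth from a rectified stereo pair: Z = f * B / d."""
        return focal_px * baseline_m / disparity_px

    def crack_width_m(width_px, depth_m, focal_px):
        """Back-project an image-plane width to the object surface."""
        return width_px * depth_m / focal_px

    def relative_error_pct(measured, reference):
        return (measured - reference) / reference * 100.0

    depth = stereo_depth(focal_px=2400.0, baseline_m=0.12, disparity_px=180.0)  # ~1.6 m
    width = crack_width_m(width_px=3.0, depth_m=depth, focal_px=2400.0)         # ~2.0 mm
    print(f"width = {width * 1000:.2f} mm, "
          f"error vs 1.95 mm reference = {relative_error_pct(width * 1000, 1.95):.1f}%")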
Computer-Aided Civil and Infrastructure Engineering, Journal Year: 2024, Volume and Issue: unknown, Published: Sept. 30, 2024
Abstract
Segmentation of structural components in infrastructure inspection images is crucial for automated and accurate condition assessment. While deep neural networks hold great potential for this task, existing methods typically require fully annotated ground truth masks, which are time-consuming and labor-intensive to create. This paper introduces the Scribble-supervised Structural Component Network (ScribCompNet), the first weakly-supervised method requiring only scribble annotations for multiclass structural component segmentation. ScribCompNet features a dual-branch architecture with higher-resolution refinement to enhance fine detail detection. It extends supervision from labeled to unlabeled pixels through a combined objective function, incorporating annotation, dynamic pseudo label, semantic context enhancement, and scale-adaptive harmony losses. Experimental results show that ScribCompNet outperforms other scribble-supervised methods and most fully-supervised counterparts, achieving 90.19% mean intersection over union (mIoU) with an 80% reduction in labeling time. Further evaluations confirm the effectiveness of the novel designs and robust performance, even with lower-quality annotations.
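A minimal sketch of a combined scribble-supervised objective over a dual-branch model; the exact forms of the four loss terms and their weights below are placeholders standing in for the paper's annotation, dynamic pseudo-label, semantic context enhancement, and scale-adaptive harmony losses.

    import torch
    import torch.nn.functional as F

    def combined_objective(logits_main, logits_refine, scribble_mask, pseudo_labels,
                           weights=(1.0, 0.5, 0.3, 0.3)):
        # Annotation loss: supervise only the scribbled pixels (unlabeled pixels marked < 0).
        labeled = scribble_mask >= 0
        ann_loss = F.cross_entropy(logits_main.permute(0, 2, 3, 1)[labeled],
                                   scribble_mask[labeled])
        # Pseudo-label loss: extend supervision to unlabeled pixels via dense pseudo labels.
        pseudo_loss = F.cross_entropy(logits_main, pseudo_labels)
        # Stand-in context regularizer: encourage confident, low-entropy predictions.
        probs = torch.softmax(logits_main, dim=1)
        context_loss = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
        # Stand-in harmony term: keep the higher-resolution refinement branch consistent
        # with the main branch after resizing to a common scale.
        refine_resized = F.interpolate(torch.softmax(logits_refine, dim=1),
                                       size=probs.shape[-2:], mode='bilinear',
                                       align_corners=False)
        harmony_loss = F.mse_loss(probs, refine_resized)
        w = weights
        return w[0] * ann_loss + w[1] * pseudo_loss + w[2] * context_loss + w[3] * harmony_loss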