International Journal of Imaging Systems and Technology, 2024, 34(6).
Published: Nov. 1, 2024

ABSTRACT
Breast cancer remains one of the most significant health threats to women, making precise segmentation of target tumors critical for early clinical intervention and postoperative monitoring. While numerous convolutional neural networks (CNNs) and vision transformers have been developed to segment breast tumors from ultrasound images, both architectures encounter difficulties in effectively modeling long-range dependencies, which are essential for accurate segmentation. Drawing inspiration from the Mamba architecture, we introduce the Vision Mamba-CNN U-Net (VMC-UNet) for breast tumor segmentation. This innovative hybrid framework merges the long-range dependency modeling capabilities of Mamba with the detailed local representation power of CNNs. A key feature of our approach is the implementation of a residual connection method within the network, utilizing the visual state space (VSS) module to extract long-range dependency features from feature maps effectively. Additionally, to better integrate texture and structural features, we designed a bilinear multi-scale attention module (BMSA), which significantly enhances the network's ability to capture and utilize intricate details across multiple scales. Extensive experiments conducted on three public datasets demonstrate that the proposed VMC-UNet surpasses other state-of-the-art methods in breast tumor segmentation, achieving Dice coefficients of 81.52% on BUSI, 88.00% on BUS, and 88.96% on STU. The source code is accessible at https://github.com/windywindyw/VMC-UNet.
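The Dice coefficients reported above measure the overlap between a predicted segmentation mask and the ground-truth mask. A minimal sketch of the metric on flat binary masks (plain Python for illustration; this is not the paper's evaluation code):

```python
def dice_coefficient(pred, target):
    """Dice = 2 * |A intersect B| / (|A| + |B|) for flat binary masks."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / total

# Toy 1-D masks: 3 overlapping foreground pixels out of 4 + 4.
pred   = [0, 1, 1, 1, 1, 0, 0, 0]
target = [0, 0, 1, 1, 1, 1, 0, 0]
print(dice_coefficient(pred, target))  # → 0.75
```

In practice the same formula is applied to flattened 2-D masks, and a Dice of 81.52% means the predicted and true tumor regions share roughly that fraction of their combined area.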
The unique characteristics of frescoes on overseas Chinese buildings can attest to the integration and historical background of Chinese and Western cultures. Reasonable analysis and preservation of these frescoes can provide sustainable development for culture and history. This research adopts image recognition technology based on artificial intelligence and proposes a ResNet-34 model and method integrating transfer learning. The deep learning model is used to identify and classify the source regions of the emigrants, effectively dealing with problems such as the small number of fresco images on emigrants' buildings, poor image quality, difficulty in feature extraction, and similar pattern and text styles. The experimental results show that the training process of the model proposed in this article is stable. On the constructed Jiangmen and Haikou JHD datasets, the final accuracy reaches 98.41% and the recall rate reaches 98.53%. The above evaluation indicators are superior to those of classic models such as AlexNet, GoogLeNet, and VGGNet. It can be seen that the proposed model has strong generalization ability, is not prone to overfitting, and can help explore the cultural connotations of the regions associated with the frescoes.
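The accuracy and recall figures cited above are standard classification indicators. A minimal sketch of how they are computed from prediction/label pairs (plain Python; the class names below are hypothetical stand-ins for the paper's emigrant source regions):

```python
def accuracy(preds, labels):
    """Fraction of predictions that match their labels."""
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

def recall(preds, labels, positive):
    """Recall for one class: TP / (TP + FN)."""
    tp = sum(p == positive and y == positive for p, y in zip(preds, labels))
    fn = sum(p != positive and y == positive for p, y in zip(preds, labels))
    return tp / (tp + fn)

# Toy run with two hypothetical source-region classes.
labels = ["jiangmen", "jiangmen", "haikou", "haikou", "haikou"]
preds  = ["jiangmen", "haikou",   "haikou", "haikou", "haikou"]
print(accuracy(preds, labels))            # → 0.8
print(recall(preds, labels, "jiangmen"))  # → 0.5
```

For a multi-class task like this one, a single reported recall is typically an average of the per-class recalls; the paper does not specify the averaging scheme, so the sketch shows only the per-class form.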