Multi-task interaction learning for accurate segmentation and classification of breast tumors in ultrasound images
Physics in Medicine and Biology
Journal Year: 2025
Volume and Issue: 70(6), P. 065006 - 065006
Published: Jan. 24, 2025
Objective. In breast diagnostic imaging, the morphological variability of tumors and the inherent ambiguity of ultrasound images pose significant challenges. Moreover, multi-task computer-aided diagnosis systems in ultrasound imaging may overlook the relationships between the pixel-wise segmentation and categorical classification tasks. Approach. In this paper, we propose a multi-task learning network with deep inter-task interactions that exploits the inherent relations between the two tasks. First, we fuse self-task attention and cross-task attention mechanisms to explore two types of interaction information, location and semantic. In addition, a feature aggregation block is developed based on a channel attention mechanism, which reduces the semantic differences between the decoder and the encoder. To exploit the inter-task relations further, our network uses a circle training strategy that refines heterogeneous outputs with the help of maps obtained from previous training. Main results. The experimental results show that our method achieved excellent performance on the BUSI and BUS-B datasets, with DSCs of 81.95% and 86.41% for the segmentation tasks and F1 scores of 82.13% and 69.01% for the classification tasks, respectively. Significance. The proposed method not only enhances all tasks related to breast tumor diagnosis but also promotes research on multi-task learning, providing further insights for clinical applications.
Language: English
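The abstract above describes fused self-task and cross-task attention that exchanges location and semantic information between the segmentation and classification branches. The fragment below is only a minimal sketch of how such a cross-task exchange could be wired; it is not the authors' implementation, and all module names, shapes, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): cross-task attention letting a
# segmentation feature map and classification tokens exchange information.
# All names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class CrossTaskAttention(nn.Module):
    def __init__(self, channels: int = 256, heads: int = 4):
        super().__init__()
        self.seg_from_cls = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.cls_from_seg = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, seg_feat: torch.Tensor, cls_feat: torch.Tensor):
        # seg_feat: (B, C, H, W) decoder features; cls_feat: (B, N, C) classification tokens
        b, c, h, w = seg_feat.shape
        seg_tokens = seg_feat.flatten(2).transpose(1, 2)  # (B, H*W, C)
        # Segmentation queries attend to classification tokens (semantic cues)...
        seg_out, _ = self.seg_from_cls(seg_tokens, cls_feat, cls_feat)
        # ...and classification queries attend to spatial tokens (location cues).
        cls_out, _ = self.cls_from_seg(cls_feat, seg_tokens, seg_tokens)
        seg_out = (seg_tokens + seg_out).transpose(1, 2).reshape(b, c, h, w)
        return seg_out, cls_feat + cls_out


# Usage sketch with toy tensors
seg = torch.randn(2, 256, 16, 16)
cls = torch.randn(2, 1, 256)
new_seg, new_cls = CrossTaskAttention()(seg, cls)
```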
Empowering early detection: artificial intelligence as a tool for breast cancer diagnosis
Elsevier eBooks
Journal Year: 2025
Volume and Issue: unknown, P. 121 - 145
Published: Jan. 1, 2025
Language: English
NMTNet: A Multi-task Deep Learning Network for Joint Segmentation and Classification of Breast Tumors
Xuelian Yang,
Yuanjun Wang,
Li Sui
et al.
Deleted Journal
Journal Year: 2025
Volume and Issue: unknown
Published: Feb. 19, 2025
Segmentation and classification of breast tumors are two critical tasks since they provide significant information for computer-aided breast cancer diagnosis. Combining these tasks leverages their intrinsic relevance to enhance performance, but the variability and complexity of tumor characteristics remain challenging. We propose a novel multi-task deep learning network (NMTNet) for joint segmentation and classification of breast tumors, which is based on a convolutional neural network (CNN) with a U-shaped architecture. It mainly comprises a shared encoder, a multi-scale fusion channel refinement (MFCR) module, a segmentation branch, and a classification branch. First, ResNet18 is used as the backbone in the encoding part to strengthen feature representation capability. Then, the MFCR module is introduced to enrich feature depth and diversity. Besides, the segmentation branch combines a lesion region enhancement (LRE) module between the encoder and decoder parts, aiming to capture more detailed texture and edge information of irregular tumors and improve segmentation accuracy. The classification branch incorporates a fine-grained classifier that reuses valuable segmentation features to discriminate between benign and malignant tumors. The proposed NMTNet is evaluated on both ultrasound and magnetic resonance imaging datasets. It achieves Dice scores of 90.30% and 91.50%, and Jaccard indices of 84.70% and 88.10% on each dataset, respectively, with classification accuracies of 87.50% and 99.64% on the corresponding datasets. Experimental results demonstrate the superiority of NMTNet over state-of-the-art methods on both segmentation and classification tasks.
Language: English
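As a rough illustration of the shared-encoder design the NMTNet abstract outlines (a ResNet18 backbone feeding a segmentation decoder and a classification head that reuses the shared features), here is a minimal PyTorch sketch; the MFCR and LRE modules are omitted, and every name and size is an assumption rather than the published architecture.

```python
# Minimal sketch under stated assumptions (not the NMTNet release): a shared
# ResNet18 encoder feeding a toy segmentation decoder and a classification
# head that reuses the shared features.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class JointSegCls(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)
        # Shared encoder: everything up to the final pooling/FC layers.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # (B, 512, H/32, W/32)
        # Toy decoder standing in for the U-shaped path with skip connections.
        self.decoder = nn.Sequential(
            nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 1),
        )
        # Classification branch reuses the shared encoder features.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_classes)
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.decoder(feats), self.classifier(feats)


mask_logits, cls_logits = JointSegCls()(torch.randn(1, 3, 224, 224))
print(mask_logits.shape, cls_logits.shape)  # (1, 1, 224, 224) (1, 2)
```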
Variational mode directed deep learning framework for breast lesion classification using ultrasound imaging
Manali Saini,
Sara Hassanzadeh,
Bushira Musa
et al.
Scientific Reports
Journal Year: 2025
Volume and Issue: 15(1)
Published: April 24, 2025
Breast cancer is the most prevalent cancer and the second leading cause of cancer-related death among women in the United States. Accurate early detection of breast cancer can reduce the number of mortalities. Recent works explore deep learning techniques with ultrasound images for detecting malignant lesions. However, the lack of explanatory features, the need for segmentation, and high computational complexity limit their applicability to this detection task. Therefore, we propose a novel ultrasound-based breast lesion classification framework that utilizes two-dimensional variational mode decomposition (2D-VMD), which provides self-explanatory features for guiding a convolutional neural network (CNN) with mixed pooling attention mechanisms. Visual inspection of these features demonstrates their explainability in terms of discriminative, lesion-specific boundary and texture information in the decomposed modes of benign and malignant images, which further guides the enhanced classification. The proposed framework can classify lesions with accuracies of 98% and 93% on two public datasets and 89% on an in-house dataset, without having to segment the lesions, unlike existing techniques, along with an optimal trade-off between sensitivity and specificity. 2D-VMD improves the areas under the receiver operating characteristic and precision-recall curves by 5% and 10%, respectively. The method achieves a relative improvement in accuracy of 14.47% (8.42%) (mean (SD)) over state-of-the-art methods on one dataset, 5.75% (4.52%) on another, and comparable performance to existing methods on the remaining dataset. Further, it is computationally efficient, with a reduction of [Formula: see text] in floating point operations as compared with existing methods.
Language: English
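The framework above hinges on two ideas: stacking decomposed modes as network input and a mixed-pooling attention mechanism. The sketch below illustrates only a generic form of the second idea, with a placeholder tensor standing in for the 2D-VMD modes; it is an assumption-laden toy, not the authors' model, and the decomposition step itself is not implemented here.

```python
# Illustrative sketch (assumptions, not the authors' implementation):
# K decomposed modes (e.g. from 2D-VMD, computed offline by any VMD tool)
# are stacked as input channels, and a mixed-pooling channel attention block
# reweights features before a small CNN classifier.
import torch
import torch.nn as nn


class MixedPoolAttention(nn.Module):
    """Channel attention driven by both average- and max-pooled statistics."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))  # (B, C) from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))   # (B, C) from max pooling
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w                        # reweighted feature maps


# Toy classifier over K stacked modes of a grayscale ultrasound image.
K = 4
model = nn.Sequential(
    nn.Conv2d(K, 16, 3, padding=1), nn.ReLU(inplace=True),
    MixedPoolAttention(16),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
modes = torch.randn(1, K, 128, 128)  # placeholder for 2D-VMD modes
print(model(modes).shape)            # (1, 2) benign/malignant logits
```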
A multi-task framework for breast cancer segmentation and classification in ultrasound imaging
Computer Methods and Programs in Biomedicine
Journal Year: 2024
Volume and Issue: 260, P. 108540 - 108540
Published: Dec. 4, 2024
Ultrasound (US) is a medical imaging modality that plays a crucial role in the early detection of breast cancer. The emergence of numerous deep learning systems has offered promising avenues for the segmentation and classification of breast cancer tumors in US images. However, challenges such as the absence of data standardization, the exclusion of non-tumor images during training, and the narrow view of single-task methodologies have hindered the practical applicability of these systems, often resulting in biased outcomes. This study aims to explore the potential of multi-task approaches in enhancing the segmentation and classification of breast cancer lesions.
Language: English
Exploiting K-Space in Magnetic Resonance Imaging Diagnosis: Dual-Path Attention Fusion for K-Space Global and Image Local Features
Bioengineering
Journal Year: 2024
Volume and Issue: 11(10), P. 958 - 958
Published: Sept. 25, 2024
Magnetic resonance imaging (MRI) diagnosis, enhanced by deep learning methods, plays a crucial role in medical image processing, facilitating precise clinical diagnosis and optimal treatment planning. Current methodologies predominantly focus on feature extraction from the image domain, which often results in the loss of global features during down-sampling processes. However, the unique representational capacity of MRI K-space is often overlooked. In this paper, we present a novel K-space-based dual-path attention fusion network. Our proposed method extracts global features from the K-space data and fuses them with local features from the image domain using an attention mechanism, thereby achieving accurate segmentation for MRI diagnosis. Specifically, our network consists of four main components: an image-domain module, a K-space module, an attention fusion module, and a decoder. We conducted ablation studies and comprehensive comparisons on the Brain Tumor Segmentation (BraTS) dataset to validate the effectiveness of each module. The results demonstrate that our method exhibits superior performance in segmentation diagnostics, outperforming state-of-the-art methods with improvements of up to 63.82% in the HD95 distance evaluation metric. Furthermore, we performed generalization testing and complexity analysis on the Automated Cardiac Diagnosis Challenge (ACDC) cardiac dataset. The findings indicate robust performance across different datasets, highlighting strong generalizability and favorable algorithmic complexity. Collectively, these results suggest that the proposed method holds significant potential for practical applications.
Language: English
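To make the dual-path idea concrete, the sketch below encodes an image alongside the log-magnitude of its 2D FFT (a stand-in for the K-space input) and fuses the two paths with a simple attention gate; the layer choices are assumptions for illustration, not the network evaluated on BraTS and ACDC.

```python
# Minimal sketch under stated assumptions (not the paper's network): one path
# encodes the image domain, the other encodes the K-space log-magnitude from
# an FFT, and a simple attention gate fuses the two before a decoder stand-in.
import torch
import torch.nn as nn


class DualPathFusion(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.image_path = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.kspace_path = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        # Attention gate: decides how to mix global (K-space) and local (image) features.
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())
        self.head = nn.Conv2d(channels, 1, 1)  # stand-in for the decoder

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # K-space of the image: 2D FFT, centered, log-magnitude.
        k = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
        k = torch.log1p(k.abs())
        f_img = self.image_path(img)
        f_k = self.kspace_path(k)
        attn = self.gate(torch.cat([f_img, f_k], dim=1))
        return self.head(f_img * attn + f_k * (1 - attn))


seg_logits = DualPathFusion()(torch.randn(1, 1, 64, 64))
print(seg_logits.shape)  # (1, 1, 64, 64)
```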