Radiotherapy and Oncology,
Journal year: 2025, Issue: unknown, pp. 110852 - 110852, Published: March 1, 2025
In the HECKTOR 2022 challenge set [1], several state-of-the-art (SOTA, i.e., best-performing) deep learning models were introduced for predicting the recurrence-free period (RFP) in head and neck cancer patients using PET and CT images. This study investigates whether a conventional DenseNet architecture, with optimized numbers of layers and image-fusion strategies, could achieve performance comparable to these SOTA models.
The dataset comprises 489 oropharyngeal cancer (OPC) patients from seven distinct centers. It was randomly divided into a training set (n = 369) and an independent test set (n = 120). Furthermore, an additional 400 OPC patients, who underwent (chemo)radiotherapy at our center, were employed for external testing. Each patient's data included pre-treatment CT- and PET-scans, manually generated GTV (gross tumour volume) contours of primary tumors and lymph nodes, and RFP information.
The presented models were compared against three SOTA models developed on the same dataset. When inputting PET and CT with an early-fusion approach (considering them as different channels of the input), DenseNet81 (with 81 layers) obtained an internal test C-index of 0.69, a metric comparable to that of the SOTA models. Notably, removal of CT from the input yielded the same internal C-index of 0.69 while improving the external test C-index from 0.59 to 0.63. Compared to these PET-only models, late fusion (concatenation of separately extracted features) of PET and CT demonstrated C-index values of 0.68 and 0.66 in the two test sets, performing better only in the external test set.
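The two fusion strategies compared above can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the tiny Encoder3D stands in for the actual DenseNet backbone, and all layer sizes and shapes are assumptions; it only shows how early fusion (PET and CT stacked as input channels of one network) differs from late fusion (separate encoders whose extracted features are concatenated) when producing a single risk score for RFP.

import torch
import torch.nn as nn

class Encoder3D(nn.Module):
    # Tiny 3D conv encoder standing in for a DenseNet backbone (assumption).
    def __init__(self, in_channels, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),           # global pooling -> (B, feat_dim, 1, 1, 1)
        )

    def forward(self, x):
        return self.net(x).flatten(1)          # (B, feat_dim)

class EarlyFusionModel(nn.Module):
    # Early fusion: PET and CT stacked as two input channels of one encoder.
    def __init__(self):
        super().__init__()
        self.encoder = Encoder3D(in_channels=2)
        self.head = nn.Linear(64, 1)           # single risk score for RFP

    def forward(self, pet, ct):
        x = torch.cat([pet, ct], dim=1)        # (B, 2, D, H, W)
        return self.head(self.encoder(x))

class LateFusionModel(nn.Module):
    # Late fusion: one encoder per modality, extracted features concatenated.
    def __init__(self):
        super().__init__()
        self.pet_encoder = Encoder3D(in_channels=1)
        self.ct_encoder = Encoder3D(in_channels=1)
        self.head = nn.Linear(128, 1)

    def forward(self, pet, ct):
        feats = torch.cat([self.pet_encoder(pet), self.ct_encoder(ct)], dim=1)
        return self.head(feats)

pet = torch.randn(2, 1, 32, 64, 64)            # toy PET volumes
ct = torch.randn(2, 1, 32, 64, 64)             # toy CT volumes
print(EarlyFusionModel()(pet, ct).shape)       # torch.Size([2, 1])
print(LateFusionModel()(pet, ct).shape)        # torch.Size([2, 1])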
In conclusion, a basic DenseNet architecture can achieve predictive performance on par with SOTA models featuring more intricate architectures in the internal test set, and better performance in the external test set, depending on the choice of input imaging and fusion strategy.
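The C-index values quoted above measure how well a model's predicted risk ranks patients by their observed recurrence-free period (1.0 = perfect ranking, 0.5 = random). A minimal sketch with the lifelines package, on made-up numbers only; lifelines expects higher scores to mean longer event-free survival, so risk scores are negated:

import numpy as np
from lifelines.utils import concordance_index

# Toy follow-up times (months), recurrence indicators, and predicted risk scores.
followup_months = np.array([12.0, 30.0, 8.0, 45.0, 20.0])
recurrence_observed = np.array([1, 0, 1, 0, 1])      # 1 = recurrence, 0 = censored
risk_scores = np.array([0.9, 0.2, 0.8, 0.1, 0.5])    # higher = higher predicted risk

# concordance_index treats larger values as longer event-free time,
# so the risk scores are negated before evaluation.
c_index = concordance_index(followup_months, -risk_scores, recurrence_observed)
print(f"C-index: {c_index:.2f}")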
Scientific Reports,
Journal year: 2025, Issue: 15(1), Published: March 10, 2025
Existing deep learning methods have achieved significant success in medical image segmentation. However, this success largely relies on stacking advanced modules and architectures, which has created a path dependency. This dependency is unsustainable, as it leads to increasingly larger model parameters and higher deployment costs. To break this path dependency, we introduce reinforcement learning to enhance segmentation performance. However, current reinforcement-learning-based methods face challenges such as high training cost, independent iterative processes, and uncertainty of coarse masks. Consequently, we propose Pixel-level Deep Reinforcement Learning with pixel-by-pixel Mask Generation (PixelDRL-MG) for more accurate and robust segmentation. PixelDRL-MG adopts a dynamic update policy, directly segmenting the regions of interest without requiring user interaction or coarse masks. We use a pixel-level Asynchronous Advantage Actor-Critic (PA3C) strategy to treat each pixel as an agent whose state (foreground or background) is iteratively updated through direct actions. Our experiments on two commonly used datasets demonstrate that PixelDRL-MG achieves superior performance to state-of-the-art baselines (especially at boundaries) while using significantly fewer parameters. We also conducted detailed ablation studies to deepen understanding and facilitate practical application. Additionally, PixelDRL-MG performs well in low-resource settings (i.e., 50-shot and 100-shot), making it an ideal choice for real-world scenarios.
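The pixel-as-agent idea described in this abstract can be sketched roughly as follows. This is not the authors' PA3C implementation: the network, the reward (change in per-pixel correctness), and all hyperparameters are assumptions, and the asynchronous training scheme is replaced by a single synchronous advantage actor-critic update for brevity. A shared network reads the image together with the current mask, emits a keep/flip action distribution and a value estimate for every pixel, and the mask is refined over a few steps.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelActorCritic(nn.Module):
    # Per-pixel policy (keep/flip) and value, computed from image + current mask.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.policy_head = nn.Conv2d(16, 2, 1)   # logits for actions {keep, flip}
        self.value_head = nn.Conv2d(16, 1, 1)    # per-pixel state value

    def forward(self, image, mask):
        h = self.backbone(torch.cat([image, mask], dim=1))
        return self.policy_head(h), self.value_head(h).squeeze(1)

def refine_step(model, image, mask, target, optimizer, gamma=0.9):
    # One synchronous actor-critic update; every pixel acts as its own agent.
    logits, value = model(image, mask)
    dist = torch.distributions.Categorical(logits=logits.permute(0, 2, 3, 1))
    action = dist.sample()                                  # (B, H, W); 1 = flip pixel
    old = mask.squeeze(1)
    new_mask = torch.where(action.bool(), 1.0 - old, old)
    # Assumed reward design: change in per-pixel correctness after the action.
    reward = (new_mask == target).float() - (old == target).float()
    with torch.no_grad():
        _, next_value = model(image, new_mask.unsqueeze(1))
        advantage = reward + gamma * next_value - value
    policy_loss = -(dist.log_prob(action) * advantage).mean()
    value_loss = F.mse_loss(value, reward + gamma * next_value)
    loss = policy_loss + 0.5 * value_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return new_mask.unsqueeze(1).detach()

# Toy usage: refine an all-background mask for a random image over a few steps.
model = PixelActorCritic()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
image = torch.randn(1, 1, 32, 32)
target = (image.squeeze(1) > 0).float()      # toy ground-truth mask
mask = torch.zeros(1, 1, 32, 32)
for _ in range(5):
    mask = refine_step(model, image, mask, target, optimizer)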