DCN-Deeplabv3+: A Novel Road Segmentation Algorithm Based on Improved Deeplabv3+
IEEE Access,
Journal year: 2024,
Issue 12, pp. 87397-87406
Published: Jan. 1, 2024
Language: English
Medical Image Classification Using Lightweight Deep Spiking Neural Network
Iranian Journal of Science and Technology Transactions of Electrical Engineering,
Journal year: 2025,
Issue unknown
Published: April 11, 2025
Language: English
Comparison of CNNs and Transformer Models in Diagnosing Bone Metastases in Bone Scans Using Grad-CAM
Clinical Nuclear Medicine,
Journal year: 2025,
Issue unknown
Published: April 16, 2025
Purpose: Convolutional neural networks (CNNs) have been studied for detecting bone metastases on bone scans; however, the application of ConvNeXt and transformer models has not yet been explored. This study aims to evaluate the performance of various deep learning models, including ConvNeXt and transformer architectures, in diagnosing metastatic lesions from bone scans.
Materials and Methods: We retrospectively analyzed bone scans of patients with cancer obtained at 2 institutions: the training and validation sets (n=4626) were from Hospital 1, and the test set (n=1428) was from Hospital 2. The evaluated models included ResNet18, the Data-Efficient Image Transformer (DeiT), Vision Transformer (ViT Large 16), Swin Transformer (Swin Base), and ConvNeXt Large. Gradient-weighted class activation mapping (Grad-CAM) was used for visualization.
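Grad-CAM's core computation is compact: the gradients of the target class score with respect to a chosen convolutional feature map are global-average-pooled into channel weights, and the weighted, ReLU-ed feature map gives the heatmap. Below is a minimal sketch for a torchvision ResNet18; the layer choice, preprocessing, and helper names are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal Grad-CAM sketch for a torchvision ResNet18 (illustrative, not the authors' code).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)  # in practice, load the trained bone-scan classifier
model.eval()

feats, grads = {}, {}

def fwd_hook(_, __, output):
    feats["value"] = output          # activations of the hooked layer

def bwd_hook(_, grad_in, grad_out):
    grads["value"] = grad_out[0]     # gradient of the class score w.r.t. those activations

# Hook the last convolutional block (a common, but assumed, choice for ResNet18).
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return a heatmap in [0, 1] with the same spatial size as the input image."""
    logits = model(image)            # image: (1, 3, H, W)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # Global-average-pool the gradients into per-channel weights, then weight the feature maps.
    weights = grads["value"].mean(dim=(2, 3), keepdim=True)                # (1, C, 1, 1)
    cam = F.relu((weights * feats["value"]).sum(dim=1, keepdim=True))      # (1, 1, h, w)
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False).detach()
    cam -= cam.min()
    cam /= cam.max() + 1e-8
    return cam[0, 0]

# Example: a random tensor stands in for a preprocessed bone-scan image.
heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```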
Results: In both the validation and test sets, the large model (0.969 and 0.885, respectively) exhibited the best performance, followed by the base model (0.965 and 0.840, respectively), both of which significantly outperformed ResNet (0.892 and 0.725, respectively). Subgroup analyses revealed that all models achieved greater diagnostic accuracy for polymetastasis compared with oligometastasis. Grad-CAM visualization showed attention concentrated on identifying local lesions in some models, whereas others highlighted global areas such as the axial skeleton and pelvis.
Conclusions: Compared with a traditional CNN, the newer architectures showed superior performance on bone scans, especially in cases of polymetastasis, suggesting their potential for medical image analysis.
Language: English
Deep Conformal Supervision: Leveraging Intermediate Features for Robust Uncertainty Quantification
Deleted Journal,
Journal year: 2024,
Issue unknown
Published: Oct. 7, 2024
Trustworthiness is crucial for artificial intelligence (AI) models in clinical settings, and a fundamental aspect of trustworthy AI is uncertainty quantification (UQ). Conformal prediction, as a robust UQ framework, has been receiving increasing attention as a valuable tool for improving model trustworthiness. An area of active research is the method of non-conformity score calculation in conformal prediction.
We propose deep conformal supervision (DCS), which leverages intermediate model outputs for non-conformity score calculation via weighted averaging based on the inverse mean calibration error of each stage.
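The weighting scheme described here can be sketched in a few lines: compute a non-conformity score per intermediate output head, estimate each head's mean calibration error, and average the scores with weights proportional to the inverse of that error. The sketch below assumes a "1 minus true-class probability" score and a simple expected-calibration-error estimate; these are illustrative choices, not the paper's exact formulation.

```python
# Illustrative sketch of inverse-calibration-error weighting of per-head non-conformity scores.
import numpy as np

def expected_calibration_error(probs: np.ndarray, labels: np.ndarray, n_bins: int = 10) -> float:
    """Simple ECE over max-probability confidence bins (an assumed calibration-error estimate)."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    ece = 0.0
    for lo in np.linspace(0.0, 1.0, n_bins, endpoint=False):
        hi = lo + 1.0 / n_bins
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs((pred[mask] == labels[mask]).mean() - conf[mask].mean())
    return ece

def dcs_nonconformity(head_probs: list[np.ndarray], labels: np.ndarray) -> np.ndarray:
    """Combine per-head non-conformity scores, weighting each head by 1 / (its mean calibration error)."""
    # In a rigorous setting the weights would be estimated on a split separate from the scores.
    weights = np.array([1.0 / (expected_calibration_error(p, labels) + 1e-8) for p in head_probs])
    weights /= weights.sum()
    # Non-conformity score per head: 1 - probability assigned to the true class.
    scores = np.stack([1.0 - p[np.arange(len(labels)), labels] for p in head_probs])  # (H, N)
    return weights @ scores  # weighted average per calibration sample, shape (N,)

# Example with two hypothetical heads (an intermediate and the final output), 5 samples, 3 classes.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=5)
heads = [rng.dirichlet(np.ones(3), size=5) for _ in range(2)]
cal_scores = dcs_nonconformity(heads, labels)
# The (1 - alpha) quantile of cal_scores would then define the conformal prediction sets.
```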
We benchmarked our method on two publicly available datasets focused on medical image classification: a pneumonia chest radiography dataset and a preprocessed version of the 2019 RSNA Intracranial Hemorrhage dataset. Our method achieved coverage errors of 16e-4 (CI: 1e-4, 41e-4) and 5e-4 (CI: 10e-4), compared with baseline coverage errors of 28e-4 (CI: 2e-4, 64e-4) and 21e-4 (CI: 8e-4, 3e-4) on the two datasets, respectively (p < 0.001 for both datasets).
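For context, a coverage error like those reported here can be measured by comparing the empirical coverage of conformal prediction sets on a test set against the nominal level 1 - alpha. A minimal sketch under the standard split-conformal recipe, with illustrative variable names:

```python
# Illustrative computation of coverage error for split conformal prediction.
import numpy as np

def coverage_error(cal_scores, test_probs, test_labels, alpha=0.1):
    """|empirical coverage - (1 - alpha)| for split-conformal prediction sets."""
    n = len(cal_scores)
    # Finite-sample-corrected quantile of calibration non-conformity scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(cal_scores, q_level, method="higher")
    # A test label is covered if its non-conformity score (1 - true-class prob) is <= q.
    test_scores = 1.0 - test_probs[np.arange(len(test_labels)), test_labels]
    empirical_coverage = (test_scores <= q).mean()
    return abs(empirical_coverage - (1 - alpha))
```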
Based on these findings, baseline results already exhibit small coverage errors. However, DCS shows a significant improvement in coverage error, which is particularly noticeable in scenarios involving smaller datasets or when considering the acceptable error levels that matter when developing UQ frameworks for healthcare applications.
Language: English