A hybrid explainable model based on advanced machine learning and deep learning models for classifying brain tumors using MRI images
Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1), Published: Jan. 10, 2025
Brain tumors present a significant global health challenge, and their early detection and accurate classification are crucial for effective treatment strategies. This study presents a novel approach combining a lightweight parallel depthwise separable convolutional neural network (PDSCNN) with a hybrid ridge-regression extreme learning machine (RRELM) for accurately classifying four types of brain tumors (glioma, meningioma, no tumor, and pituitary) based on MRI images. The proposed approach enhances the visibility and clarity of tumor features in the images by employing contrast-limited adaptive histogram equalization (CLAHE). A PDSCNN is then employed to extract relevant tumor-specific patterns while minimizing computational complexity. An RRELM model is proposed, enhancing the traditional ELM for improved performance. The proposed framework was compared with various state-of-the-art models in terms of accuracy, number of parameters, and layer sizes, and it achieved remarkable average precision, recall, and accuracy values of 99.35%, 99.30%, and 99.22%, respectively, through five-fold cross-validation. The PDSCNN-RRELM outperformed the pseudoinverse ELM (PELM) and exhibited superior performance; the introduction of ridge regression led to enhancements in performance, parameter counts, and layer sizes relative to those models. Additionally, the interpretability of the model was demonstrated using Shapley Additive Explanations (SHAP), providing insights into its decision-making process and increasing confidence for real-world diagnosis.
Language: English
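The pipeline this abstract describes can be illustrated with a minimal sketch: CLAHE contrast enhancement, one depthwise separable convolution block of the kind a PDSCNN stacks in parallel, and a closed-form ridge-regression ELM readout. The clip limit, tile size, kernel width, and ridge penalty below are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of a CLAHE -> depthwise-separable-conv -> ridge-regression ELM pipeline.
# Hyperparameters are assumptions for illustration only.
import cv2
import numpy as np
import torch
import torch.nn as nn

def clahe_enhance(gray_img: np.ndarray) -> np.ndarray:
    """Apply contrast-limited adaptive histogram equalization to a grayscale (uint8) MRI slice."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_img)

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a pointwise 1x1 convolution (the PDSCNN building block)."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.pointwise(self.depthwise(x)))

def ridge_elm_fit(H: torch.Tensor, Y: torch.Tensor, lam: float = 1e-2) -> torch.Tensor:
    """Closed-form ridge-regression ELM output weights: beta = (H^T H + lam*I)^-1 H^T Y."""
    n_hidden = H.shape[1]
    return torch.linalg.solve(H.T @ H + lam * torch.eye(n_hidden), H.T @ Y)
```

In such a setup, `H` would hold the hidden-layer responses for the CNN features of the training images and `Y` the one-hot tumor labels; the ridge term replaces the pseudoinverse solution used by a plain ELM.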
D-YOLO: A Lightweight Model for Strawberry Health Detection
Enhui Wu, Ruijun Ma, Daming Dong et al.
Agriculture, Journal Year: 2025, Volume and Issue: 15(6), P. 570 - 570, Published: March 7, 2025
In complex agricultural settings, accurately and rapidly identifying the growth and health conditions of strawberries remains a formidable challenge. Therefore, this study aims to develop a deep learning framework, Disease-YOLO (D-YOLO), based on the YOLOv8s model to monitor the health status of strawberries. Key innovations include (1) replacing the original backbone with MobileNetv3 to optimize computational efficiency; (2) implementing a Bidirectional Feature Pyramid Network for enhanced multi-scale feature fusion; (3) integrating Contextual Transformer attention modules in the neck network to improve lesion localization; and (4) adopting a weighted intersection over union loss to address class imbalance. Evaluated on our custom strawberry disease dataset containing 1301 annotated images across three fruit development stages and five plant health states, D-YOLO achieved 89.6% mAP on the train set and 90.5% mAP on the test set while reducing parameters by 72.0% and floating-point operations by 75.1% compared with the baseline YOLOv8s. The framework's balanced performance and efficiency surpass conventional models, including Faster R-CNN, RetinaNet, YOLOv5s, YOLOv6s, and YOLOv8s, in comparative trials. Cross-domain validation on maize demonstrated D-YOLO's superior generalization with 94.5% mAP, outperforming YOLOv8 by 0.6%. This lightweight solution enables precise, real-time crop health monitoring. The proposed architectural improvements provide a practical paradigm for intelligent disease detection in precision agriculture.
Language: English
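The weighted intersection-over-union loss mentioned in innovation (4) can be sketched briefly. The box format (x1, y1, x2, y2) and the per-sample class weighting below are assumptions for illustration, not D-YOLO's exact formulation.

```python
# Sketch of a weighted IoU-style box loss: the IoU penalty for each predicted box
# is scaled by a class weight, so under-represented classes contribute more.
import torch

def weighted_iou_loss(pred_boxes: torch.Tensor,
                      target_boxes: torch.Tensor,
                      class_weights: torch.Tensor) -> torch.Tensor:
    """pred_boxes, target_boxes: (N, 4) in (x1, y1, x2, y2); class_weights: (N,)."""
    # Intersection rectangle
    x1 = torch.max(pred_boxes[:, 0], target_boxes[:, 0])
    y1 = torch.max(pred_boxes[:, 1], target_boxes[:, 1])
    x2 = torch.min(pred_boxes[:, 2], target_boxes[:, 2])
    y2 = torch.min(pred_boxes[:, 3], target_boxes[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    # Union area
    area_p = (pred_boxes[:, 2] - pred_boxes[:, 0]) * (pred_boxes[:, 3] - pred_boxes[:, 1])
    area_t = (target_boxes[:, 2] - target_boxes[:, 0]) * (target_boxes[:, 3] - target_boxes[:, 1])
    union = (area_p + area_t - inter).clamp(min=1e-7)
    iou = inter / union
    # Class-weighted (1 - IoU) penalty, averaged over the batch
    return (class_weights * (1.0 - iou)).mean()
```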
Medivision: Empowering Colorectal Cancer Diagnosis and Tumor Localization Through Supervised Learning Classifications and Grad-CAM Visualization of Medical Colonoscopy Images
Cognitive Computation, Journal Year: 2025, Volume and Issue: 17(2), Published: March 21, 2025
Language: English
Brain tumor segmentation using multi-scale attention U-Net with EfficientNetB4 encoder for enhanced MRI analysis
R. Preetha, Jasmine Pemeena Priyadarsini M, J. S. Nisha et al.
Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1), Published: March 22, 2025
Abstract
Accurate brain tumor segmentation is critical for clinical diagnosis and treatment planning. This study proposes an advanced segmentation framework that combines a Multiscale Attention U-Net with the EfficientNetB4 encoder to enhance segmentation performance. Unlike conventional U-Net-based architectures, the proposed model leverages EfficientNetB4's compound scaling to optimize feature extraction at multiple resolutions while maintaining low computational overhead. Additionally, a Multi-Scale Attention Mechanism (utilizing $$1\times 1$$, $$3\times 3$$, and $$5\times 5$$ kernels) enhances feature representation by capturing tumor boundaries across different scales, addressing the limitations of existing CNN-based methods. Our approach effectively suppresses irrelevant regions and improves tumor localization through attention-enhanced skip connections and residual attention blocks. Extensive experiments were conducted on the publicly available Figshare dataset, comparing EfficientNet variants to determine the optimal architecture. The proposed model demonstrated superior performance, achieving an Accuracy of 99.79%, an MCR of 0.21%, a Dice Coefficient of 0.9339, and an Intersection over Union (IoU) of 0.8795, outperforming the other variants in both accuracy and efficiency. The training process was analyzed using key metrics, including the Dice Coefficient, dice loss, precision, recall, specificity, and IoU, showing stable convergence and generalization. The method was also evaluated against state-of-the-art approaches, surpassing them all in accuracy and mean IoU. This study demonstrates the effectiveness of the framework for robust and efficient segmentation of brain tumors, positioning it as a valuable tool for clinical research applications.
Language: English
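The multi-scale attention idea described here (parallel $$1\times 1$$, $$3\times 3$$, and $$5\times 5$$ convolutions whose fused response gates the skip-connection features) can be sketched as follows; the channel counts and sigmoid gating form are illustrative assumptions rather than the paper's exact block.

```python
# Sketch of a multi-scale attention gate for U-Net skip connections.
import torch
import torch.nn as nn

class MultiScaleAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels, 1)
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, skip: torch.Tensor) -> torch.Tensor:
        # Responses at three receptive-field sizes, concatenated along channels
        multi = torch.cat([self.branch1(skip), self.branch3(skip), self.branch5(skip)], dim=1)
        # Sigmoid gate re-weights the skip features before they reach the decoder
        gate = torch.sigmoid(self.fuse(multi))
        return skip * gate

# Example: gate a 64-channel encoder feature map
x = torch.randn(1, 64, 56, 56)
print(MultiScaleAttention(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```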
Improving Malaria diagnosis through interpretable customized CNNs architectures
Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1), Published: Feb. 22, 2025
Abstract
Malaria, which is spread via female Anopheles mosquitoes and brought on by the Plasmodium parasite, persists as a serious illness, especially in areas with high mosquito density. Traditional detection techniques, like examining blood samples under a microscope, tend to be labor-intensive and unreliable and necessitate specialized individuals. To address these challenges, we employed several customized convolutional neural networks (CNNs), including a Parallel Convolutional Neural Network (PCNN), a Soft Attention Parallel Convolutional Neural Network (SPCNN), and a Soft Attention after Functional Block Parallel Convolutional Neural Network (SFPCNN), to improve the effectiveness of malaria diagnosis. Among these, the SPCNN emerged as the most successful model, outperforming all other models across the evaluation metrics. The SPCNN achieved a precision of 99.38 ± 0.21%, a recall of 99.37%, similarly high F1 score and accuracy (± 0.30%), and an area under the receiver operating characteristic curve (AUC) of 99.95 ± 0.01%, demonstrating its robustness in detecting malaria parasites. Furthermore, we evaluated various transfer learning (TL) algorithms, including VGG16, ResNet152, MobileNetV3Small, EfficientNetB6, EfficientNetB7, DenseNet201, Vision Transformer (ViT), Data-efficient Image Transformer (DeiT), ImageIntern, and Swin Transformer (versions v1 and v2). The proposed SPCNN model surpassed all TL methods in every measure. With 2.207 million parameters and a size of 26 MB, the SPCNN is more complex than the PCNN but simpler than the SFPCNN. Despite this, it exhibited the fastest testing time (0.00252 s), making it computationally efficient. We assessed interpretability using feature activation maps, Gradient-weighted Class Activation Mapping (Grad-CAM), and SHapley Additive exPlanations (SHAP) visualizations for the three architectures, illustrating why the SPCNN outperformed the others. The findings from our experiments show a significant improvement in parasite detection, and the approach outperforms traditional manual microscopy in terms of speed. This study highlights the importance of utilizing cutting-edge technologies to develop robust and effective diagnostic tools for malaria prevention.
Language: English
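Grad-CAM, one of the interpretability tools cited in this abstract, can be sketched against a generic torchvision backbone; the model (resnet18) and layer choice below are assumptions for illustration, not the authors' SPCNN.

```python
# Sketch of Grad-CAM: gradient-weighted average of the last convolutional feature
# map, producing a heat map over the input image.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
activations = {}

def save_activation(_module, _inputs, output):
    # Keep the feature map of the last convolutional stage for Grad-CAM
    activations["feat"] = output

model.layer4.register_forward_hook(save_activation)

x = torch.randn(1, 3, 224, 224)           # stand-in for a preprocessed blood-smear image
scores = model(x)
top_score = scores[0, scores.argmax()]

# Gradient of the top class score with respect to the stored feature map
grads = torch.autograd.grad(top_score, activations["feat"])[0]

weights = grads.mean(dim=(2, 3), keepdim=True)             # global-average-pooled gradients
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heat map in [0, 1] over the input
print(cam.shape)                                            # torch.Size([1, 1, 224, 224])
```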