Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1), Published: Oct. 25, 2024
Oral squamous cell carcinoma (OSCC) poses a severe challenge in oncology due to the lack of diagnostic devices, leading to delays in detecting the disorder. Diagnosing OSCC through histopathology demands an expert pathologist because the cellular presentation is variable and highly complex. Existing diagnostic approaches have specific efficiency and accuracy restrictions, highlighting the necessity for more reliable techniques. The rise of deep neural network (DNN) models and their applications in medical imaging has been instrumental in disease detection. Automatic detection systems using deep learning (DL) show tremendous promise in investigating medical imagery with speed, efficiency, and accuracy. For OSCC, such a system allows the diagnostic workflow to be streamlined, facilitating earlier detection and enhancing survival rates. The analysis of histopathological images (HIs) can assist in accurately identifying tumorous tissue, reducing turnaround times and increasing the efficacy of pathologists. This study presents a Squeeze-Excitation Hybrid Deep Learning model for OSCC Recognition (SEHDL-OSCCR) on HIs. The presented SEHDL-OSCCR technique mainly focuses on detecting oral cancer (OC) using hybrid DL models. Bilateral filtering (BF) is initially used to remove noise. Next, the technique employs SE-CapsNet as the feature extractor. An improved crayfish optimization algorithm (ICOA) is utilized to improve the performance of the model. Finally, classification is performed by employing a convolutional neural network with bidirectional long short-term memory (CNN-BiLSTM). The simulation results are investigated on a benchmark dataset. The experimental validation illustrated a greater outcome of 98.75% compared with recent approaches.
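As a rough illustration of the first stage of the pipeline above, the following is a minimal sketch of edge-preserving bilateral filtering applied to a histopathology patch with OpenCV; the file name and filter parameters are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: bilateral filtering as a denoising step for histopathological
# images, as described in the SEHDL-OSCCR pipeline. Parameters are assumptions.
import cv2

def denoise_histopathology_image(path: str):
    """Load a slide patch and apply edge-preserving bilateral filtering."""
    image = cv2.imread(path)                      # BGR uint8 image
    if image is None:
        raise FileNotFoundError(path)
    # d=9: neighborhood diameter; sigmaColor/sigmaSpace=75: smoothing strength
    return cv2.bilateralFilter(image, d=9, sigmaColor=75, sigmaSpace=75)

if __name__ == "__main__":
    filtered = denoise_histopathology_image("oscc_patch.png")  # hypothetical file
    cv2.imwrite("oscc_patch_denoised.png", filtered)
```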
BMC Medical Imaging, Journal Year: 2024, Volume and Issue: 24(1), Published: May 11, 2024
Abstract
This study addresses the critical challenge of detecting brain tumors using MRI images, a pivotal task in medical diagnostics that demands high accuracy and interpretability. While deep learning has shown remarkable success in image analysis, there remains a substantial need for models that are not only accurate but also interpretable to healthcare professionals. Existing methodologies, predominantly deep learning-based, often act as black boxes, providing little insight into their decision-making process. This research introduces an integrated approach using ResNet50, a deep learning model, combined with Gradient-weighted Class Activation Mapping (Grad-CAM) to offer a transparent and explainable framework for brain tumor detection. We employed a dataset enhanced through data augmentation to train and validate our model. The results demonstrate a significant improvement in model performance, with a testing accuracy of 98.52% and precision-recall metrics exceeding 98%, showcasing the model's effectiveness in distinguishing tumor presence. The application of Grad-CAM provides insightful visual explanations, illustrating the model's focus areas when making predictions. This fusion of accuracy and explainability holds profound implications for medical diagnostics, offering a pathway towards more reliable and interpretable detection tools.
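For readers who want to see how Grad-CAM attaches to a ResNet50 classifier of this kind, here is a minimal PyTorch sketch; the two-class head, the choice of `layer4` as the target layer, and the random placeholder input are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal Grad-CAM sketch for a ResNet50 classifier (assumed 2-class head).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # tumor / no tumor (assumed)
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block, a common Grad-CAM target for ResNet50.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(x: torch.Tensor) -> torch.Tensor:
    """Return a heatmap (H, W) in [0, 1] for the predicted class of one image."""
    logits = model(x)                          # x: (1, 3, 224, 224), normalized
    cls = int(logits.argmax(dim=1))
    score = logits[0, cls]
    model.zero_grad()
    score.backward()
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # GAP of grads
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    cam = cam - cam.min()
    return (cam / cam.max().clamp(min=1e-8))[0, 0]

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # placeholder input
```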
BMC Medical Imaging, Journal Year: 2024, Volume and Issue: 24(1), Published: May 15, 2024
Abstract
Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and imaging variations. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches lack the robustness and scalability needed for precise, automated classification. Their major limitations include a high degree of manual intervention, the potential for human error, a limited ability to handle large datasets, and poor generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNNs) for accurate brain tumor classification. This innovative approach not only emphasizes the use of a modified VGG16 architecture optimized for the task but also highlights the significance of transfer learning in the medical imaging domain. Federated learning enables decentralized training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. The model benefits from transfer learning by utilizing a pre-trained CNN, which significantly enhances its ability to classify tumors accurately by leveraging knowledge gained from vast datasets. Our model was trained on a dataset combining the figshare, SARTAJ, and Br35H collections, employing decentralized, privacy-preserving training. The adoption of transfer learning further bolsters the model's performance, making it adept at handling the intricate variations associated with different tumor types. The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types accurately and highlighting its transformative potential in enhancing brain tumor classification from MRI images.
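To make the federated training idea concrete, the following is a minimal federated-averaging (FedAvg) sketch in PyTorch with a VGG16 backbone; the client data loaders, the four-class head, and the equal-weight averaging are assumptions for illustration rather than the authors' exact protocol.

```python
# Minimal FedAvg sketch: decentralized training of a shared model across clients.
import copy
import torch
from torchvision import models

def make_model(num_classes: int = 4) -> torch.nn.Module:
    m = models.vgg16(weights=None)
    m.classifier[6] = torch.nn.Linear(4096, num_classes)  # glioma/meningioma/none/pituitary
    return m

def local_update(model, loader, epochs=1, lr=1e-3):
    """Train a copy of the global model on one client's private data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    local.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(local(x), y).backward()
            opt.step()
    return local.state_dict()

def fed_avg(state_dicts):
    """Average parameters from all clients (equal weighting assumed)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        avg[key] = stacked.mean(dim=0).to(avg[key].dtype)
    return avg

# One communication round (client_loaders is a hypothetical list of DataLoaders):
# global_model = make_model()
# updates = [local_update(global_model, dl) for dl in client_loaders]
# global_model.load_state_dict(fed_avg(updates))
```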
Frontiers in Medicine, Journal Year: 2024, Volume and Issue: 11, Published: March 7, 2024
Breast cancer, a prevalent cancer among women worldwide, necessitates precise and prompt detection for successful treatment. While conventional histopathological examination is the benchmark, it is a lengthy process prone to variations between different observers. Employing machine learning to automate the diagnosis of breast cancer presents a viable option, striving to improve both precision and speed. Previous studies have primarily focused on applying various deep learning models to the classification of histopathological images. These methodologies leverage convolutional neural networks (CNNs) and other advanced algorithms to differentiate between benign and malignant tumors from image data. Current models, despite their potential, encounter obstacles related to generalizability, computational performance, and the management of datasets with class imbalances. Additionally, a significant number of these models do not possess the requisite transparency and interpretability, which are vital for medical diagnostic purposes. To address these limitations, our study introduces a model based on EfficientNetV2. This model incorporates state-of-the-art techniques in image processing and network architecture, aiming for accuracy, efficiency, and robustness in classification. We employed an EfficientNetV2 model fine-tuned for the specific task of breast cancer classification. Our model underwent rigorous training and validation using the BreakHis dataset, which includes diverse histopathological images. Advanced data preprocessing, augmentation techniques, and a cyclical learning rate strategy were implemented to enhance model performance. The introduced model exhibited remarkable efficacy, attaining an accuracy of 99.68%, balanced precision and recall as indicated by the F1 score, and a considerable Cohen's Kappa value. These indicators highlight the model's proficiency in correctly categorizing histopathological images, surpassing current models in reliability and effectiveness. The research also emphasizes improved accessibility, catering to individuals with disabilities and the elderly. By enhancing the visual representation of results, the proposed approach aims to make strides toward inclusive interpretation, ensuring equitable access to diagnostic information.
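The cyclical learning rate strategy mentioned above can be wired up in a few lines; the following PyTorch sketch fine-tunes a torchvision EfficientNetV2-S with a cyclic scheduler. The head size, learning-rate bounds, and step counts are illustrative assumptions, not the paper's reported hyperparameters.

```python
# Minimal sketch: fine-tuning EfficientNetV2 with a cyclical learning rate.
import torch
from torchvision import models

model = models.efficientnet_v2_s(weights=models.EfficientNet_V2_S_Weights.DEFAULT)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 2)  # benign/malignant

optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-5, max_lr=1e-3,
    step_size_up=500, mode="triangular2",   # LR cycles up and down every 500 steps
)
loss_fn = torch.nn.CrossEntropyLoss()

def train_one_epoch(loader):
    model.train()
    for images, labels in loader:            # loader: hypothetical BreakHis DataLoader
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        scheduler.step()                      # cyclical LR advances per batch
```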
BMC Medical Informatics and Decision Making, Journal Year: 2024, Volume and Issue: 24(1), Published: May 27, 2024
Abstract
Lung cancer remains a leading cause of cancer-related mortality globally, with prognosis significantly dependent on early-stage detection. Traditional diagnostic methods, though effective, often face challenges regarding accuracy, early detection, and scalability, being invasive, time-consuming, and prone to ambiguous interpretations. This study proposes an advanced machine learning model designed to enhance lung cancer stage classification using CT scan images, aiming to overcome these limitations by offering a faster, non-invasive, and reliable diagnostic tool. Utilizing the IQ-OTHNCCD dataset, comprising CT scans from various cancer stages as well as healthy individuals, we performed extensive preprocessing including resizing, normalization, and Gaussian blurring. A Convolutional Neural Network (CNN) was then trained on this preprocessed data, with class imbalance addressed through the Synthetic Minority Over-sampling Technique (SMOTE). The model's performance was evaluated through metrics such as precision, recall, F1-score, and ROC curve analysis. The results demonstrated an accuracy of 99.64%, with F1-score values exceeding 98% across all categories. SMOTE enhanced the model's ability to classify underrepresented classes, contributing to its robustness. These findings underscore the model's potential in transforming lung cancer diagnostics, providing highly accurate classification that could facilitate early detection and tailored treatment strategies, ultimately improving patient outcomes.
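A brief sketch of the class-balancing step described above follows, using imbalanced-learn's SMOTE; the image size and toy class counts are illustrative assumptions, and images are flattened because SMOTE operates on 2-D feature matrices.

```python
# Minimal sketch: balancing CT image classes with SMOTE before CNN training.
import numpy as np
from imblearn.over_sampling import SMOTE

def balance_with_smote(images: np.ndarray, labels: np.ndarray):
    """images: (N, H, W) preprocessed scans; labels: (N,) stage labels."""
    n, h, w = images.shape
    flat = images.reshape(n, h * w)                 # SMOTE needs 2-D input
    flat_res, labels_res = SMOTE(random_state=42).fit_resample(flat, labels)
    return flat_res.reshape(-1, h, w), labels_res   # back to image shape

# Example with synthetic data standing in for preprocessed CT slices:
X = np.random.rand(100, 64, 64).astype(np.float32)
y = np.array([0] * 80 + [1] * 20)                   # imbalanced two-class toy labels
X_bal, y_bal = balance_with_smote(X, y)
print(np.bincount(y_bal))                            # classes are now equal in size
```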
BMC Medical Informatics and Decision Making, Journal Year: 2024, Volume and Issue: 24(1), Published: April 30, 2024
Brain tumors pose a significant medical challenge, necessitating precise detection and diagnosis, especially in magnetic resonance imaging (MRI). Current methodologies reliant on traditional image processing and conventional machine learning encounter hurdles in accurately discerning tumor regions within intricate MRI scans and are often susceptible to noise and varying image quality. The advent of artificial intelligence (AI) has revolutionized various aspects of healthcare, providing innovative solutions for diagnostics and treatment strategies. This paper introduces a novel AI-driven methodology for brain tumor detection from MRI images, leveraging the EfficientNetB2 deep learning architecture. Our approach incorporates advanced preprocessing techniques, including image cropping, histogram equalization, and the application of homomorphic filters, to enhance the quality of the data for more accurate detection. The proposed model exhibits substantial performance enhancement, demonstrating validation accuracies of 99.83%, 99.75%, and 99.2% on the BD-BrainTumor, Brain-tumor-detection, and Brain-MRI-images-for-brain-tumor-detection datasets, respectively. This research holds promise for refined clinical diagnostics and patient care, fostering reliable tumor identification from MRI images. All code is available on GitHub (https://github.com/muskan258/Brain-Tumor-Detection-from-MRI-Images-Utilizing-EfficientNetB2).
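As a rough illustration of the preprocessing stages named above, here is a minimal sketch of histogram equalization followed by a simple frequency-domain homomorphic filter for a grayscale MRI slice; the Gaussian high-pass parameters and the input path are assumptions, not the authors' values.

```python
# Minimal sketch: histogram equalization plus a homomorphic filter for MRI slices.
import cv2
import numpy as np

def homomorphic_filter(gray: np.ndarray, sigma: float = 30.0, gain: float = 1.5) -> np.ndarray:
    """Suppress low-frequency illumination and boost detail in a grayscale image."""
    log_img = np.log1p(gray.astype(np.float64))
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = gray.shape
    y, x = np.ogrid[:rows, :cols]
    dist2 = (y - rows / 2) ** 2 + (x - cols / 2) ** 2
    highpass = 1.0 - np.exp(-dist2 / (2.0 * sigma ** 2))    # Gaussian high-pass mask
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * (1.0 + gain * highpass)))
    result = np.expm1(np.real(filtered))
    return cv2.normalize(result, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def preprocess_slice(path: str) -> np.ndarray:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # hypothetical MRI slice file
    equalized = cv2.equalizeHist(gray)              # histogram equalization
    return homomorphic_filter(equalized)            # illumination correction
```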
BMC Medical Imaging, Journal Year: 2024, Volume and Issue: 24(1), Published: Aug. 2, 2024
Abstract
Skin cancer stands as one of the foremost challenges in oncology, with its early detection being crucial for successful treatment outcomes. Traditional diagnostic methods depend on dermatologist expertise, creating a need for more reliable, automated tools. This study explores deep learning, particularly Convolutional Neural Networks (CNNs), to enhance the accuracy and efficiency of skin cancer diagnosis. Leveraging the HAM10000 dataset, a comprehensive collection of dermatoscopic images encompassing a diverse range of lesions, this work introduces a sophisticated CNN model tailored to the nuanced task of skin lesion classification. The model's architecture is intricately designed with multiple convolutional, pooling, and dense layers, aimed at capturing the complex visual features of skin lesions. To address the challenge of class imbalance within the dataset, an innovative data augmentation strategy is employed, ensuring a balanced representation of each lesion category during training. Furthermore, the optimized layer configuration works alongside this augmentation, significantly boosting the precision of detection. The learning process uses the Adam optimizer, with parameters fine-tuned over 50 epochs at a batch size of 128 to sharpen the model's ability to discern subtle patterns in the image data. A ModelCheckpoint callback ensures the preservation of the best model iteration for future use. The proposed model demonstrates an accuracy of 97.78%, a notable precision of 97.9%, and recall and F2 score of 97.8%, underscoring its potential as a robust tool for the classification of skin cancer, thereby supporting clinical decision-making and contributing to improved patient outcomes in dermatology.
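The training configuration spelled out above (Adam optimizer, 50 epochs, batch size 128, ModelCheckpoint) maps onto a short Keras sketch like the one below; the layer sizes, input resolution, and the seven-class head are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal Keras sketch: CNN trained with Adam, 50 epochs, batch 128, ModelCheckpoint.
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

def build_cnn(input_shape=(64, 64, 3), num_classes=7):
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

checkpoint = callbacks.ModelCheckpoint("best_model.keras",
                                       monitor="val_accuracy",
                                       save_best_only=True)   # keep best epoch only

# x_train/y_train, x_val/y_val: hypothetical preprocessed HAM10000 arrays.
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=50, batch_size=128, callbacks=[checkpoint])
```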
Current Oncology, Journal Year: 2024, Volume and Issue: 31(9), P. 5255 - 5290, Published: Sept. 6, 2024
Artificial intelligence (AI) is revolutionizing head and neck cancer (HNC) care by providing innovative tools that enhance diagnostic accuracy and personalize treatment strategies. This review highlights the advancements in AI technologies, including deep learning and natural language processing, and their applications in HNC. The integration of AI with imaging techniques, genomics, and electronic health records is explored, emphasizing its role in early detection, biomarker discovery, and treatment planning. Despite noticeable progress, challenges such as data quality, algorithmic bias, and the need for interdisciplinary collaboration remain. Emerging innovations like explainable AI, AI-powered robotics, and real-time monitoring systems are poised to further advance the field. Addressing these challenges and fostering collaboration among AI experts, clinicians, and researchers will be crucial for developing equitable and effective AI applications. The future of AI in HNC care holds significant promise, offering potential breakthroughs in diagnostics, personalized therapies, and improved patient outcomes.
Frontiers in Computational Neuroscience, Journal Year: 2024, Volume and Issue: 18, Published: June 12, 2024
The necessity of prompt and accurate brain tumor diagnosis is unquestionable for optimizing treatment strategies and patient prognoses. Traditional reliance on Magnetic Resonance Imaging (MRI) analysis, contingent upon expert interpretation, grapples with challenges such as time-intensive processes and susceptibility to human error.
BMC Medical Imaging, Journal Year: 2024, Volume and Issue: 24(1), Published: Sept. 2, 2024
Breast cancer is a leading cause of mortality among women globally, necessitating the precise classification of breast ultrasound images for early diagnosis and treatment. Traditional methods using CNN architectures such as VGG, ResNet, and DenseNet, though somewhat effective, often struggle with class imbalances and subtle texture variations, leading to reduced accuracy on minority classes such as malignant tumors. To address these issues, we propose a methodology that leverages EfficientNet-B7, a scalable CNN architecture, combined with advanced data augmentation techniques to enhance minority-class representation and improve model robustness. Our approach involves fine-tuning EfficientNet-B7 on the BUSI dataset, implementing RandomHorizontalFlip, RandomRotation, and ColorJitter to balance the dataset. The training process includes early stopping to prevent overfitting and optimize performance metrics. Additionally, we integrate Explainable AI (XAI) techniques, specifically Grad-CAM, to enhance the interpretability and transparency of the model's predictions, providing visual and quantitative insights into the features and regions influencing classification outcomes. The model achieves an accuracy of 99.14%, significantly outperforming existing CNN-based approaches in breast ultrasound image classification. The incorporation of XAI enhances our understanding of the model's decision-making process, thereby increasing its reliability and facilitating clinical adoption. This comprehensive framework offers a robust and interpretable tool for the detection of breast cancer, advancing the capabilities of automated diagnostic systems and supporting clinical decision-making processes.
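The named augmentations and the EfficientNet-B7 fine-tuning translate directly into a torchvision setup like the sketch below; the normalization statistics, image size, local dataset path, and three-class head (benign/malignant/normal in BUSI) are assumptions for illustration.

```python
# Minimal sketch: augmentation pipeline and EfficientNet-B7 fine-tuning setup.
import torch
from torchvision import datasets, models, transforms

train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# BUSI images arranged in class folders (hypothetical local path).
train_set = datasets.ImageFolder("busi/train", transform=train_transforms)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.efficientnet_b7(weights=models.EfficientNet_B7_Weights.DEFAULT)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 3)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()
# A standard training loop with early stopping on validation loss would follow here.
```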
Diagnostics, Journal Year: 2025, Volume and Issue: 15(3), P. 260 - 260, Published: Jan. 23, 2025
Background/Objectives: Squamous cell carcinoma (SCC), a prevalent form of skin cancer, presents diagnostic challenges, particularly in resource-limited settings with low-quality imaging infrastructure. The accurate classification of SCC margins is essential to guide effective surgical interventions and reduce recurrence rates. This study proposes a vision transformer (ViT)-based model to improve SCC margin classification by addressing the limitations of convolutional neural networks (CNNs) in analyzing histopathological images. Methods: We introduced a transfer learning approach using a ViT architecture customized with additional flattening, batch normalization, and dense layers to enhance its capability for margin classification. A performance evaluation was conducted using machine learning metrics averaged over five-fold cross-validation, and comparisons were made with leading CNN models. Ablation studies explored the effects of the architectural configuration on performance. Results: The ViT-based model achieved superior performance, with 0.928 ± 0.027 accuracy and 0.927 ± 0.028 AUC, surpassing the highest-performing CNN model, InceptionV3 (accuracy: 0.86 ± 0.049; AUC: 0.837 ± 0.029), demonstrating its robustness; the ablation studies reinforced the importance of tailored architectural configurations in enhancing performance. Conclusions: This study underscores the transformative potential of ViTs in histopathological image analysis, especially in resource-limited settings. By reducing dependence on high-quality imaging and specialized expertise, the model offers a scalable solution for global cancer diagnostics. Future research should prioritize optimizing ViTs for such environments and broadening their clinical applications.
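To illustrate what a customized ViT head of the kind described in the Methods might look like, here is a minimal PyTorch sketch that freezes a pretrained torchvision ViT-B/16 encoder and replaces its classification head with flattening, batch normalization, and dense layers; the backbone choice, layer widths, and two-class output are assumptions, not the paper's exact design.

```python
# Minimal sketch: ViT transfer learning with a flatten/batch-norm/dense head.
import torch
from torch import nn
from torchvision import models

backbone = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False                 # freeze pretrained encoder

# Replace the classification head: the encoder's 768-d CLS representation feeds
# flatten -> batch norm -> dense -> dense(2) for the two margin classes.
backbone.heads = nn.Sequential(
    nn.Flatten(),
    nn.BatchNorm1d(768),
    nn.Linear(768, 256),
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(256, 2),
)

optimizer = torch.optim.Adam(backbone.heads.parameters(), lr=1e-4)
logits = backbone(torch.randn(4, 3, 224, 224))   # placeholder batch
print(logits.shape)                              # torch.Size([4, 2])
```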