Applied Sciences, Journal Year: 2024, Volume and Issue: 14(22), P. 10536 - 10536, Published: Nov. 15, 2024
Histopathological analysis is an essential exam for detecting various types of cancer. The process is traditionally time-consuming and laborious. By taking advantage of deep learning models, assisting the pathologist in diagnosis becomes possible. In this work, a study was carried out based on the DenseNet neural network. It consisted of changing its architecture through combinations of Transformer and MBConv blocks to investigate their impact on classifying histopathological images of penile cancer. Due to the limited number of samples in the dataset, pre-training was performed on another, larger lung and colon cancer image dataset. Various combinations of these architectural components were systematically evaluated to compare their performance. The results indicate significant improvements in feature representation, demonstrating the effectiveness of the combined elements and resulting in an F1-Score of up to 95.78%. Its diagnostic performance confirms the importance of such techniques in men's health.
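A minimal sketch of the two-stage transfer-learning idea described above is given below in PyTorch: a DenseNet backbone is first trained on the larger lung and colon dataset, then reused for the smaller penile-cancer task with an added MBConv-style block. The block placement, class counts, and omitted training loop are illustrative assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class MBConv(nn.Module):
    """Simplified MBConv-style inverted-residual block: expand -> depthwise conv -> project."""
    def __init__(self, channels, expansion=4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.SiLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),  # depthwise
            nn.BatchNorm2d(hidden), nn.SiLU(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection

# Stage 1: pre-train a DenseNet backbone on the larger lung/colon histopathology dataset
# (training loop omitted; 5 classes assumed, as in LC25000).
backbone = densenet121(weights=None)
backbone.classifier = nn.Linear(backbone.classifier.in_features, 5)
# ... train on the lung/colon images here ...

# Stage 2: reuse the pre-trained features for the smaller penile-cancer task,
# inserting one MBConv-style block before a new (hypothetical) binary head.
model = nn.Sequential(
    backbone.features,        # pre-trained DenseNet feature extractor (1024 channels out)
    MBConv(1024),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(1024, 2),
)
logits = model(torch.randn(1, 3, 224, 224))
```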
Heliyon, Journal Year: 2024, Volume and Issue: 10(9), P. e30625 - e30625, Published: May 1, 2024
Automatic classification of colon and lung cancer images is crucial for early detection and accurate diagnostics. However, there is room for improvement to enhance accuracy, ensuring better diagnostic precision. This study introduces two novel dense architectures (D1 and D2) and emphasizes their effectiveness in classifying cancer from diverse images. It also highlights their resilience, efficiency, and superior performance across multiple datasets. These architectures were tested on various types of datasets, including NCT-CRC-HE-100K (a set of 100,000 non-overlapping image patches from hematoxylin and eosin (H&E) stained histological images of human colorectal cancer (CRC) and normal tissue), CRC-VAL-HE-7K (7180 patches from N=50 patients with colorectal adenocarcinoma, with no overlap with NCT-CRC-HE-100K), LC25000 (Lung and Colon Cancer Histopathological Image dataset), and IQ-OTHNCCD (Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases); the mix of histopathological and Computed Tomography (CT) scan images underscores the multi-modal capability of the proposed models. Moreover, the study addresses imbalanced datasets, particularly IQ-OTHNCCD, with a specific focus on model resilience and robustness. To assess overall performance, experiments were conducted in different scenarios.
The D1 model achieved an impressive 99.80% accuracy on one of the datasets, with a Jaccard Index (J) of 0.8371, Matthew's Correlation Coefficient (MCC) of 0.9073, Cohen's Kappa (Kp) of 0.9057, and a Critical Success Index (CSI) of 0.8213. When subjected to 10-fold cross-validation on LC25000, it averaged (avg) 99.96% accuracy (avg J, MCC, Kp, and CSI of 0.9993, 0.9987, 0.9853, and 0.9990), surpassing recently reported performances. Furthermore, the D2 ensemble reached 93% accuracy (J, MCC, Kp, and CSI of 0.7556, 0.8839, 0.8796, and 0.7140), exceeding benchmarks and aligning with other reported results. Efficiency evaluations were also performed. For instance, training on only 10% of the data resulted in high accuracy rates of 99.19% (supporting metrics of 0.9840, 0.9898, and 0.9837) for D1 and 99.30% (0.9863, 0.9913, and 0.9861) for D2. On NCT-CRC-HE-100K, an accuracy of 99.53% (0.9906, 0.9946, and 0.9906) was obtained when training on 30% of the dataset and testing on the remaining 70%. On CRC-VAL-HE-7K, accuracies of 95% (0.8845, 0.9455, 0.9452, and 0.8745) and 96% (0.8926, 0.9504, 0.9503, and 0.8798) were achieved, respectively, outperforming previously reported results and closely matching others. Lastly, the models significantly outperformed the InceptionV3, Xception, and DenseNet201 benchmarks, achieving an accuracy rate of 82.98% (0.7227, 0.8095, 0.8081, and 0.6671).
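For reference, the evaluation metrics quoted above (accuracy, Jaccard Index, MCC, Cohen's Kappa, and the Critical Success Index) can be computed as in the short sketch below. The averaging conventions are assumptions, since the abstract does not state them; note that the per-class CSI coincides with the per-class Jaccard index by definition.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, jaccard_score, matthews_corrcoef)

def classification_report_extra(y_true, y_pred):
    """Accuracy plus the J, MCC, Kp, and CSI values reported in the abstract."""
    cm = confusion_matrix(y_true, y_pred)
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    csi = np.mean(tp / (tp + fp + fn))  # Critical Success Index, averaged over classes
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "J": jaccard_score(y_true, y_pred, average="weighted"),
        "MCC": matthews_corrcoef(y_true, y_pred),
        "Kp": cohen_kappa_score(y_true, y_pred),
        "CSI": csi,
    }

print(classification_report_extra([0, 1, 2, 2, 1], [0, 1, 2, 1, 1]))
```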
Finally, using explainable AI algorithms such as Grad-CAM, Grad-CAM++, Score-CAM, and Faster Score-CAM, along with their emphasized versions, we visualized the features of the last layer for histopathological as well as CT-scan samples.
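As an illustration of the simplest of these methods, the sketch below implements plain Grad-CAM with PyTorch hooks over the last dense block of a DenseNet201 backbone; the choice of backbone and target layer stands in for the paper's D1/D2 models and is an assumption.

```python
import torch
import torch.nn.functional as F
from torchvision.models import densenet201

class GradCAM:
    """Minimal Grad-CAM over a chosen convolutional layer (hooks capture activations and gradients)."""
    def __init__(self, model, target_layer):
        self.model = model.eval()
        self.activations, self.gradients = None, None
        target_layer.register_forward_hook(self._save_activation)
        target_layer.register_full_backward_hook(self._save_gradient)

    def _save_activation(self, module, inp, out):
        self.activations = out.detach()

    def _save_gradient(self, module, grad_in, grad_out):
        self.gradients = grad_out[0].detach()

    def __call__(self, x, class_idx=None):
        logits = self.model(x)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        self.model.zero_grad()
        logits[0, class_idx].backward()
        weights = self.gradients.mean(dim=(2, 3), keepdim=True)            # pooled gradients
        cam = F.relu((weights * self.activations).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)          # normalised heat map

# Usage with a DenseNet201 backbone as a stand-in model.
model = densenet201(weights=None)
cam = GradCAM(model, model.features.denseblock4)
heatmap = cam(torch.randn(1, 3, 224, 224))  # overlay this on the input image patch
```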
These models, with their multi-modality, robustness, and efficiency in classification, hold promise for advancements in medical diagnostics. They have the potential to revolutionize diagnosis and improve healthcare accessibility worldwide.
Technologies, Journal Year: 2025, Volume and Issue: 13(2), P. 54 - 54, Published: Feb. 1, 2025
The automated and precise classification of lung and colon cancer from histopathological photos continues to pose a significant challenge in medical diagnosis, as current computer-aided diagnosis (CAD) systems are frequently constrained by their dependence on singular deep learning architectures, elevated computational complexity, and ineffectiveness in utilising multiscale features. To this end, the present research introduces a CAD system that integrates several lightweight convolutional neural networks (CNNs) with dual-layer feature extraction and selection to overcome the aforementioned constraints. Initially, it extracts attributes from two separate layers (pooling and fully connected) of three pre-trained CNNs (MobileNet, ResNet-18, and EfficientNetB0). Second, it uses the benefits of canonical correlation analysis for dimensionality reduction of the pooling-layer features to reduce complexity. In addition, the features encapsulate both high- and low-level representations.
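A minimal sketch of this dual-layer extraction step is shown below for one backbone (MobileNetV2, used here as a stand-in for the paper's MobileNet); the same procedure would be repeated for ResNet-18 and EfficientNetB0. The specific layers tapped are assumptions based on the abstract.

```python
import torch
from torchvision.models import mobilenet_v2

# Pre-trained backbone used purely as a feature extractor.
backbone = mobilenet_v2(weights="IMAGENET1K_V1").eval()

def extract_dual_features(x):
    """Return (pooling-layer features, fully-connected-layer features) for a batch of images."""
    with torch.no_grad():
        fmap = backbone.features(x)           # convolutional feature maps
        pooled = fmap.mean(dim=(2, 3))        # global-average-pooling layer output (1280-d)
        fc_out = backbone.classifier(pooled)  # fully connected layer output (1000-d)
    return pooled, fc_out

pool_feats, fc_feats = extract_dual_features(torch.randn(4, 3, 224, 224))
```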
Finally, to benefit from multiple network architectures while reducing complexity, the proposed system merges the dual-layer variables and then applies analysis of variance (ANOVA) and Chi-Squared tests to select the most discriminative features from the integrated CNN architectures. The system is assessed on the LC25000 dataset leveraging eight distinct classifiers, encompassing various Support Vector Machine (SVM) variants, Decision Trees, Linear Discriminant Analysis, and k-nearest neighbours. The experimental results exhibited outstanding performance, attaining 99.8% accuracy with cubic SVM classifiers employing merely 50 ANOVA-selected features, exceeding the performance of the individual CNNs while markedly diminishing computational complexity. The framework's capacity to sustain exceptional accuracy with a limited feature set renders it especially advantageous for clinical applications where diagnostic precision and efficiency are critical. These findings confirm the efficacy of the multi-CNN, multi-layer methodology in enhancing classification performance and mitigating the constraints of existing CAD systems.
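A compact, self-contained sketch of the fusion and selection stages is given below with scikit-learn, using random arrays in place of the extracted CNN features; the CCA component count and the reading of "cubic SVM" as a degree-3 polynomial kernel are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
pool_a = rng.normal(size=(n, 512))    # stand-in pooling features from backbone A
pool_b = rng.normal(size=(n, 512))    # stand-in pooling features from backbone B
fc_feats = rng.normal(size=(n, 256))  # stand-in fully-connected-layer features
y = rng.integers(0, 5, size=n)        # five LC25000 classes

# 1) Canonical correlation analysis projects the two pooling-feature sets
#    into a small shared subspace (the dimensionality-reduction step).
cca = CCA(n_components=20)
pool_a_c, pool_b_c = cca.fit_transform(pool_a, pool_b)

# 2) Merge the reduced pooling features with the fully connected-layer features.
fused = np.hstack([pool_a_c, pool_b_c, fc_feats])

# 3) Keep the 50 most discriminative features via the ANOVA F-test,
#    then train a cubic (degree-3 polynomial kernel) SVM on them.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=50),
                    SVC(kernel="poly", degree=3))
clf.fit(fused, y)
print(clf.score(fused, y))
```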
Journal of Cloud Computing: Advances, Systems and Applications, Journal Year: 2024, Volume and Issue: 13(1), Published: April 19, 2024
Abstract
The recent advancements in automated lung cancer diagnosis through the application of Convolutional Neural Networks (CNN) on Computed Tomography (CT) scans have marked a significant leap in medical imaging and diagnostics. The precision of these CNN-based classifiers in detecting and analyzing cancer symptoms has opened new avenues for early detection and treatment planning. However, despite these technological strides, there are critical areas that require further exploration and development. In this landscape, computer-aided diagnostic systems based on artificial intelligence, particularly deep learning methods like the region proposal network, dual path networks, and local binary patterns, have become pivotal. Yet these systems face challenges such as limited interpretability, data variability handling issues, and insufficient generalization. Addressing these challenges is key to enhancing accurate diagnosis, which is fundamental for effective treatment planning and improving patient outcomes.
This study introduces an advanced approach that combines a Convolutional Neural Network with DenseNet, leveraging data fusion and mobile edge computing for lung cancer identification and classification. The integration of these techniques enables the system to amalgamate information from multiple sources, enhancing the robustness and accuracy of the model. Mobile edge computing facilitates faster processing and analysis of CT scan images by bringing computational resources closer to the data source, which is crucial for real-time applications. The images undergo preprocessing, including resizing and rescaling, to optimize feature extraction. The DenseNet-CNN model, strengthened by these capabilities, excels at extracting features from CT scans, effectively distinguishing between healthy and cancerous tissues. The classification categories include Normal, Benign, and Malignant, with the latter sub-categorized into adenocarcinoma, squamous cell carcinoma, and large cell carcinoma. In controlled experiments, the model outperformed existing state-of-the-art methods, achieving an impressive accuracy of 99%. This indicates its potential as a powerful tool for diagnosing lung cancer and marks an advancement in diagnostic technology.
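A minimal sketch of the preprocessing and five-class classification step is given below in PyTorch. The 224x224 input size, ImageNet normalisation statistics, and the DenseNet-121 variant are assumptions, and the data-fusion and edge-computing components are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import transforms
from torchvision.models import densenet121

# Preprocessing mirrors the resizing/rescaling step described in the abstract
# (target size and normalisation statistics are assumptions).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                    # resize CT slices to a fixed input size
    transforms.ToTensor(),                            # rescale pixel values to [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Five output classes: normal, benign, and the three malignant sub-types.
CLASSES = ["normal", "benign", "adenocarcinoma",
           "squamous_cell_carcinoma", "large_cell_carcinoma"]

model = densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, len(CLASSES))
model.eval()

def predict(pil_image):
    """Classify a single CT slice (PIL RGB image) into one of the five categories."""
    x = preprocess(pil_image).unsqueeze(0)
    with torch.no_grad():
        return CLASSES[model(x).argmax(dim=1).item()]
```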
Healthcare Technology Letters, Journal Year: 2025, Volume and Issue: 12(1), Published: Jan. 1, 2025
Abstract
Cancer is a condition in which cells in the body grow uncontrollably, often forming tumours and potentially spreading to various areas of the body. It is regarded as a hazardous condition in medical case history analysis. Every year, many people die of cancer that is not detected at an early stage. Therefore, it is necessary to accurately identify and effectively treat cancer to save human lives. Machine learning and deep learning models are effective for cancer identification. However, the effectiveness of these efforts is limited by small dataset size, poor data quality, interclass changes between lung squamous cell carcinoma and adenocarcinoma, difficulties with mobile device deployment, and a lack of image- and individual-level accuracy tests. To overcome these difficulties, this study proposed an extremely lightweight model using a convolutional neural network that achieved 98.16% accuracy on the large combined lung and colon dataset and, individually, 99.02% and 99.40% on the separate cancer subsets. The model used only 70 thousand parameters, making it highly suitable for real-time solutions. Explainability methods such as Grad-CAM and symmetric explanation highlight the specific regions of the input that affect the decision of the model, helping to identify potential challenges. This will aid professionals in developing an automated and accurate approach for detecting these cancer types.
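To give a sense of what a budget of roughly 70 thousand parameters allows, the sketch below defines a hypothetical compact CNN and counts its parameters. It is not the authors' architecture; the layer sizes and the five-class head are assumptions.

```python
import torch
import torch.nn as nn

class TinyHistoNet(nn.Module):
    """Hypothetical compact CNN for histopathology patches, sized well under 70k parameters."""
    def __init__(self, num_classes=5):  # five lung/colon classes, as in LC25000
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyHistoNet()
# Roughly 24k weights in this sketch, comfortably within a mobile-friendly budget.
print(sum(p.numel() for p in model.parameters()))
```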