2022 5th International Conference on Information and Communications Technology (ICOIACT),
Journal Year:
2023,
Volume and Issue:
unknown, P. 200 - 205
Published: Nov. 10, 2023
Lung cancer has been a leading cause of cancer-related deaths, with the number of fatalities in the United Kingdom between 2017 and 2019 reaching 34,771, as reported by Cancer Research UK. Lung cancer arises when cells inside the lung grow uncontrollably. Detecting nodules at an early stage can increase the chances of survival for humans. Researchers have been investigating the potential of artificial intelligence and deep learning to develop computer-aided detection (CAD) systems for automated nodule classification. CAD could help radiologists detect nodules and improve diagnostic accuracy. Our systematic literature review provides an overview of the performance of current deep-learning methods and datasets for detecting and classifying lung nodules using CT images. We conducted the review following PRISMA 2020. This paper gives the reader insights into various facets of the field and motivates researchers to further explore opportunities for crafting models that can be seamlessly integrated into a CAD system.
BMC Medical Informatics and Decision Making,
Journal Year:
2024,
Volume and Issue:
24(1)
Published: May 27, 2024
Abstract
Lung cancer remains a leading cause of cancer-related mortality globally, with prognosis significantly dependent on early-stage detection. Traditional diagnostic methods, though effective, often face challenges regarding accuracy, early detection, and scalability, being invasive, time-consuming, and prone to ambiguous interpretations. This study proposes an advanced machine learning model designed to enhance lung cancer stage classification using CT scan images, aiming to overcome these limitations by offering a faster, non-invasive, and reliable diagnostic tool. Utilizing the IQ-OTHNCCD dataset, comprising scans from various cancer stages as well as healthy individuals, we performed extensive preprocessing including resizing, normalization, and Gaussian blurring. A Convolutional Neural Network (CNN) was then trained on this preprocessed data, with class imbalance addressed using the Synthetic Minority Over-sampling Technique (SMOTE). The model's performance was evaluated through metrics such as precision, recall, F1-score, and ROC curve analysis. The results demonstrated an accuracy of 99.64%, with F1-score values exceeding 98% across all categories. SMOTE enhanced the model's ability to classify underrepresented classes, contributing to its robustness. These findings underscore the model's potential in transforming lung cancer diagnostics, providing high classification accuracy, which could facilitate early detection and tailored treatment strategies, ultimately improving patient outcomes.
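The preprocessing-plus-SMOTE-plus-CNN pipeline described in this abstract can be outlined roughly as follows. This is a minimal sketch assuming OpenCV, imbalanced-learn, and TensorFlow/Keras; the image size, blur kernel, and network depth are illustrative assumptions rather than the paper's exact configuration.

```python
# Illustrative sketch: resize/normalize/blur CT slices, re-balance classes with
# SMOTE, then train a small CNN. Shapes and layer sizes are placeholders.
import numpy as np
import cv2                                   # resizing and Gaussian blur
from imblearn.over_sampling import SMOTE     # class-imbalance correction
import tensorflow as tf

IMG_SIZE = 128                               # assumed target resolution

def preprocess(img: np.ndarray) -> np.ndarray:
    """Resize, Gaussian-blur, and min-max normalize one grayscale CT slice."""
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
    img = cv2.GaussianBlur(img, (5, 5), 0)
    return img.astype("float32") / 255.0

def balance(X: np.ndarray, y: np.ndarray):
    """SMOTE operates on flat feature vectors: flatten, resample, reshape back."""
    flat = X.reshape(len(X), -1)
    flat_res, y_res = SMOTE(random_state=42).fit_resample(flat, y)
    return flat_res.reshape(-1, IMG_SIZE, IMG_SIZE, 1), y_res

def build_cnn(n_classes: int) -> tf.keras.Model:
    """Small CNN classifier; depth and widths are illustrative only."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input((IMG_SIZE, IMG_SIZE, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```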
Artificial Intelligence Review,
Journal Year:
2024,
Volume and Issue:
57(8)
Published: July 29, 2024
Abstract
In healthcare, medical practitioners employ various imaging techniques such as CT, X-ray, PET, and MRI to diagnose patients, emphasizing the crucial need for early disease detection to enhance survival rates. Medical Image Analysis (MIA) has undergone a transformative shift with the integration of Artificial Intelligence (AI), encompassing Machine Learning (ML) and Deep Learning (DL), promising advanced diagnostics and improved healthcare outcomes. Despite these advancements, a comprehensive understanding of the efficiency metrics, computational complexities, interpretability, and scalability of AI-based approaches in MIA is essential for their practical feasibility in real-world environments. Existing studies exploring AI applications in MIA lack a consolidated review covering the major stages and specifically focused on evaluating the efficiency of these approaches. The absence of a structured framework limits decision-making for researchers, practitioners, and policymakers in selecting and implementing optimal AI approaches in healthcare. Furthermore, the lack of standardized evaluation metrics complicates methodology comparison, hindering the development of efficient approaches. This article addresses these challenges through a review, taxonomy, and analysis of existing AI-based approaches. The taxonomy covers the major image-processing stages, classifying the approaches at each stage by method and further analyzing them in terms of origin, objective, method, and dataset to reveal their strengths and weaknesses. Additionally, a comparative analysis is conducted to evaluate the approaches over five publicly available datasets: ISIC 2018, CVC-Clinic, 2018 DSB, DRIVE, and EM, in terms of accuracy, precision, recall, F-measure, mIoU, and specificity. The popular public datasets are also briefly described and analyzed. The resulting taxonomy provides a structured landscape of the field, facilitating evidence-based decision-making and guiding future research efforts toward efficient and scalable approaches that meet current healthcare needs.
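For reference, the evaluation metrics named in the comparative analysis can all be derived from basic confusion counts. The sketch below (plain NumPy, not the review's own evaluation code) shows one way to compute accuracy, precision, recall, F-measure, specificity, and mIoU for binary labels and segmentation masks.

```python
# Minimal reference implementations of the metrics used in the comparison.
import numpy as np

def confusion_counts(y_true: np.ndarray, y_pred: np.ndarray):
    """True/false positive and negative counts for binary labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp, tn, fp, fn

def classification_metrics(y_true, y_pred, eps: float = 1e-9):
    tp, tn, fp, fn = confusion_counts(np.asarray(y_true), np.asarray(y_pred))
    precision   = tp / (tp + fp + eps)
    recall      = tp / (tp + fn + eps)          # a.k.a. sensitivity
    specificity = tn / (tn + fp + eps)
    accuracy    = (tp + tn) / (tp + tn + fp + fn + eps)
    f_measure   = 2 * precision * recall / (precision + recall + eps)
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                f_measure=f_measure, specificity=specificity)

def mean_iou(masks_true, masks_pred, eps: float = 1e-9):
    """mIoU over pairs of binary segmentation masks (one pair per image)."""
    ious = []
    for t, p in zip(masks_true, masks_pred):
        inter = np.logical_and(t, p).sum()
        union = np.logical_or(t, p).sum()
        ious.append((inter + eps) / (union + eps))
    return float(np.mean(ious))
```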
Nowadays, cancer's devastating impact is growing, taking thousands of lives prematurely each day. Lung cancer stands at the forefront of this grim reality. Timely and accurate diagnosis is crucial, as it directly correlates with effective treatment and improved patient outcomes. In this paper, we proposed an ensemble deep-learning method for detecting and classifying lung cancers that can greatly assist a Computer Aided Diagnosis (CAD) system. Initially, three deep convolutional neural network (CNN) transfer learning approaches, MobileNetV2, VGG19, and ResNet50, were used individually to perform classification. Then, these models are combined for better performance using a fusion approach on chest CT and PET-CT images. This approach leverages the strengths of ResNet50's pretrained weights for feature extraction, and the extracted features are then concatenated for classification through a weighted average technique. After extensive experimental analysis, the ensemble model achieved a test accuracy of 98.93%, which is higher than the individual models' performance (98.67%, 98.20%, and 97.67% for ResNet50). It can serve as an efficient diagnostic tool for lung cancer detection, and its prediction results based on ensemble learning outperform recent approaches.
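A weighted-average ensemble over the three named backbones can be wired up as in the following sketch, assuming TensorFlow/Keras with ImageNet-pretrained MobileNetV2, VGG19, and ResNet50. The input size, classification heads, and fusion weights are placeholders, not the values reported in the paper; in practice the weights would typically be set from each branch's validation accuracy, and each backbone's own preprocess_input step would be applied first.

```python
# Sketch of a weighted-average ensemble of three pretrained CNN branches.
import tensorflow as tf

NUM_CLASSES = 2
INPUT_SHAPE = (224, 224, 3)

def branch(backbone_fn, name: str) -> tf.keras.Model:
    """One transfer-learning branch: frozen ImageNet backbone + softmax head."""
    base = backbone_fn(weights="imagenet", include_top=False,
                       pooling="avg", input_shape=INPUT_SHAPE)
    base.trainable = False
    out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    return tf.keras.Model(base.input, out, name=name)

m1 = branch(tf.keras.applications.MobileNetV2, "mobilenetv2_branch")
m2 = branch(tf.keras.applications.VGG19,       "vgg19_branch")
m3 = branch(tf.keras.applications.ResNet50,    "resnet50_branch")

# Weighted average of the three softmax outputs; weights are placeholders.
w1, w2, w3 = 0.4, 0.3, 0.3
inp = tf.keras.Input(INPUT_SHAPE)
fused = tf.keras.layers.Add()([w1 * m1(inp), w2 * m2(inp), w3 * m3(inp)])
ensemble = tf.keras.Model(inp, fused, name="weighted_average_ensemble")
ensemble.compile(optimizer="adam", loss="categorical_crossentropy",
                 metrics=["accuracy"])
```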
Scientific Reports,
Journal Year:
2025,
Volume and Issue:
15(1)
Published: May 4, 2025
Lung cancer has been stated as one of the most prevalent killers up to the present time, which clearly underlines the rationale for early diagnosis to enhance the life expectancy of patients afflicted with the condition. The reasons behind the usage of transformer-based deep learning classifiers for the detection of lung cancer include their accuracy and robustness, along with their capability to handle and evaluate large data sets, and much more. Such models can be more complex and help utilize multiple modalities to give extensive information that will be critical in ascertaining the right diagnosis at the right time. However, existing works encounter several limitations, including reliance on annotated data, overfitting, high computational complexity, and limited interpretability. Moreover, the issue of the stability of these models' performance when applied to actual clinical datasets is still an open question, an even bigger one that greatly reduces their utilization in practice. To tackle these issues, we develop a novel Cancer Nexus Synergy (CanNS) framework, which applies a Swin-Transformer UNet (SwiNet) model for segmentation, an Xception-LSTM GAN (XLG) CancerNet for classification, and Devilish Levy Optimization (DevLO) for fine-tuning parameters. This paper breaks new ground in that the presented elements are incorporated in a manner that co-operatively elevates diagnostic capabilities while at the same time being computationally light and resilient. SwiNet provides segmented analysis, XLG CancerNet delivers precise classification of cases, and DevLO optimizes the parameters of the system, making the system sensible and efficient. The outcomes indicate that the CanNS framework enhances lung cancer detection's sensitivity and specificity compared to previous approaches.
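Since the abstract names but does not specify SwiNet, XLG CancerNet, or DevLO, the sketch below only illustrates the hand-off between the three stages using placeholder callables: segmentation masks the CT volume, the masked region is classified, and an outer tuner (plain random search standing in for DevLO) selects hyperparameters. None of this is the authors' implementation.

```python
# Schematic wiring of a segment -> mask -> classify -> tune pipeline.
from typing import Callable, Dict
import numpy as np

def cann_pipeline(ct_volume: np.ndarray,
                  segment: Callable[[np.ndarray], np.ndarray],
                  classify: Callable[[np.ndarray], np.ndarray]) -> np.ndarray:
    """Run segmentation, keep only the segmented region, then classify it."""
    mask = segment(ct_volume)        # binary lesion/nodule mask (placeholder model)
    roi = ct_volume * mask           # suppress background voxels
    return classify(roi)             # class probabilities (placeholder model)

def tune(objective: Callable[[Dict], float], search_space: Dict, n_trials: int = 20):
    """Stand-in for DevLO: random search over hyperparameters, illustration only."""
    rng = np.random.default_rng(0)
    best, best_score = None, -np.inf
    for _ in range(n_trials):
        cand = {k: rng.choice(v) for k, v in search_space.items()}
        score = objective(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```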
IEEE Access,
Journal Year:
2024,
Volume and Issue:
12, P. 58573 - 58585
Published: Jan. 1, 2024
In the advanced computer vision era, Convolutional Neural Networks (CNNs) play a pivotal role in image processing, as they excel at automatically extracting important patterns and structures for accurate analysis across diverse domains. However, achieving higher accuracy often leads to intensifying computational and timing demands. To address this challenge, this research introduces a novel dual feature extraction methodology. The approach is implemented using two distinct modules, employed at different stages of the model: (1) an Edge Gradient-Dimensionality Reduction (EGDR) module, which encapsulates pixel edge-gradient features from the raw input frame, leading to dimensionality reduction by a factor of 0.5; and (2) a Subtle Local Feature Extraction (SLFE) pooling algorithm module, which prioritizes local subtle features over maximum or average content. The combination of these modules proves particularly effective in enhancing classification accuracy while minimizing computational overhead and training duration. Subsequently, comprehensive training, validation, and testing were conducted on a selected multi-class chest computed tomography medical dataset with various state-of-the-art CNN architectures such as VGG-16, InceptionV3, and ResNet50 to identify the most suitable model for further experimentation with the proposed method. The CNN-SLFE framework with the EGDR module achieved a significant 17.94% reduction in training time compared to the non-EGDR variant, while concurrently delivering an accuracy improvement of 1.17% over existing frameworks without the module.
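The abstract does not give the exact EGDR operator, but one plausible reading of "pixel edge gradient features with dimensionality reduction by a factor of 0.5" is sketched below using Sobel gradients and a 2x spatial downsample in OpenCV; the SLFE pooling criterion is not specified in the abstract and is deliberately not sketched.

```python
# Illustrative EGDR-style front end: edge-gradient magnitude at half resolution.
import numpy as np
import cv2

def egdr_like(frame: np.ndarray) -> np.ndarray:
    """Edge-gradient feature map at half the input resolution (assumed interpretation)."""
    gray = frame if frame.ndim == 2 else cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
    mag = cv2.magnitude(gx, gy)                        # per-pixel edge-gradient magnitude
    h, w = mag.shape
    return cv2.resize(mag, (w // 2, h // 2))           # 0.5x dimensionality reduction
```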