Sensors International, Journal Year: 2023, Volume and Issue: 4, P. 100229 - 100229, Published: Jan. 1, 2023
The novel coronavirus is the newest member of the SARS family and can cause mild to severe infection in the lungs and in other vital organs such as the heart, kidneys and liver. For detecting COVID-19 from images, a traditional ANN can be employed. This method begins by extracting features, which are then fed into a suitable classifier. The classification rate is not very high, since the feature extraction depends on the experimenters' expertise. To address this drawback, a hybrid CNN-KNN-based model with 5-fold cross-validation is proposed to classify CT scans of patients as COVID-19 or non-COVID-19. At first, pre-processing steps such as contrast enhancement, median filtering, data augmentation and image resizing are performed. Secondly, the entire dataset is divided into five equal folds for training and testing. Cross-validation ensures generalization and prevents overfitting of the network. The CNN consists of four convolutional layers, max-pooling layers and two fully connected layers, 23 layers in total. The architecture is used as a feature extractor in this case. Features are taken from the model's fourth layer and are finally classified using K-Nearest Neighbor rather than softmax for better accuracy. Experiments were conducted over an augmented dataset of 4085 CT scan images. The average accuracy, precision, recall and F1 score after performing 5-fold cross-validation were 98.26%, 99.42%, 97.2% and 98.19%, respectively. The method's accuracy is comparable to existing works described further on, where state-of-the-art and custom models were used. Hence, it can diagnose patients with higher efficiency.
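The pipeline summarized above (pre-processing, 5-fold cross-validation, a small CNN used as a feature extractor, and a KNN classifier in place of the softmax head) can be sketched roughly as below. This is a minimal illustration under assumed settings: the layer sizes, input shape, the `load_ct_dataset` loader and the `feature_layer` name are hypothetical, not the authors' exact configuration.

```python
# Minimal sketch of a CNN-feature-extractor + KNN pipeline with 5-fold
# cross-validation. Layer sizes, names and the dataset loader are
# illustrative assumptions, not the paper's exact architecture.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def build_cnn(input_shape=(128, 128, 1)):
    # Four convolutional blocks followed by two dense layers; the final
    # softmax is only used while training the extractor.
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu", name="feature_layer"),
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

X, y = load_ct_dataset()  # hypothetical loader returning image array and 0/1 labels

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True).split(X, y):
    cnn = build_cnn()
    cnn.fit(X[train_idx], y[train_idx], epochs=10, batch_size=32, verbose=0)
    # Use an intermediate dense layer as the feature extractor instead of softmax.
    extractor = keras.Model(cnn.input, cnn.get_layer("feature_layer").output)
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(extractor.predict(X[train_idx]), y[train_idx])
    preds = knn.predict(extractor.predict(X[test_idx]))
    scores.append(accuracy_score(y[test_idx], preds))

print("mean 5-fold accuracy:", np.mean(scores))
```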
Talanta, Journal Year: 2022, Volume and Issue: 244, P. 123409 - 123409, Published: April 1, 2022
More than six billion tests for COVID-19 have already been performed in the world. Testing for the SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus-2) virus and the corresponding human antibodies is essential not only for the diagnostics and treatment of the infection by medical institutions, but also as a pre-requisite for major semi-normal economic and social activities such as international flights, offline work and study in offices, and access to malls and sport events. Accuracy, sensitivity, specificity, time to results and cost per test are parameters for which even a minimal improvement in any of them may have a noticeable impact on life in many countries. We described, analyzed and compared methods of COVID-19 detection, while representing their parameters in 22 tables. Also, we compared the performance of some FDA-approved test kits with that of clinical and non-FDA-approved kits just described in the scientific literature. RT-PCR still remains the gold standard for detection of the virus, but the pressing need for alternative, less expensive, more rapid, point-of-care methods is evident. Those that eventually get developed to satisfy this need are explained, discussed and quantitatively compared. The review is written from a bioanalytical chemistry perspective, but it may be interesting to a broader circle of readers who are interested in understanding COVID-19 testing and in helping to leave the pandemic in the past.
Computers in Biology and Medicine, Journal Year: 2023, Volume and Issue: 156, P. 106668 - 106668, Published: Feb. 20, 2023
Artificial Intelligence (AI) techniques of deep learning have revolutionized disease diagnosis with their outstanding image classification performance. In spite of these results, widespread adoption of these techniques in clinical practice is still taking place at a moderate pace. One major hindrance is that a trained Deep Neural Network (DNN) model provides a prediction, but questions about why and how that prediction was made remain unanswered. This linkage is of utmost importance for the regulated healthcare domain to increase trust in the automated system by practitioners, patients and other stakeholders. The application of deep learning in medical imaging has to be interpreted with caution due to health and safety concerns, similar to the blame attribution in the case of an accident involving autonomous cars. The consequences of both false positive and false negative cases are far-reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art deep learning algorithms comprise complex interconnected structures with millions of parameters and a 'black box' nature, offering little understanding of their inner working, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques help to understand model predictions, which in turn helps to develop trust in the system, accelerate diagnosis, and meet adherence to regulatory requirements. This survey provides a comprehensive review of the promising field of XAI in biomedical imaging diagnostics. We also provide a categorization of the techniques, discuss open challenges, and outline future directions that would interest clinicians, regulators and model developers.
Informatics in Medicine Unlocked, Journal Year: 2023, Volume and Issue: 40, P. 101286 - 101286, Published: Jan. 1, 2023
This paper investigates the applications of explainable AI (XAI) in healthcare, which aims to provide transparency, fairness, accuracy, generality, and comprehensibility of the results obtained from ML algorithms and decision-making systems. The black box nature of these systems has remained a challenge, and interpretable techniques can potentially address this issue. Here we critically review previous studies related to interpretability methods in the medical domain. Descriptions of various types of XAI methods such as layer-wise relevance propagation (LRP), Uniform Manifold Approximation and Projection (UMAP), Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), ANCHOR, contextual importance and utility (CIU), Training calibration-based explainers (TraCE), Gradient-weighted Class Activation Mapping (Grad-CAM), t-distributed Stochastic Neighbor Embedding (t-SNE), NeuroXAI, and the Explainable Cumulative Fuzzy Membership Criterion (X-CFCMC), along with the diseases that can be explained through these methods, are provided throughout the paper. The paper also discusses how these technologies can transform healthcare services. The usability and reliability of the presented methods are summarized, including work on XGBoost for mediastinal cysts and tumors, a 3D brain tumor segmentation network, and the TraCE method for medical image analysis. Overall, this work contributes to the growing field of XAI with insights for researchers, practitioners, and decision-makers in the healthcare industry. Finally, we discuss the performance of XAI methods applied to health care. It should be mentioned that a brief description of the implemented methods is given in the methodology section.
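Of the XAI techniques listed above, Grad-CAM is the one most commonly paired with convolutional image classifiers such as the COVID-19 CT models discussed earlier. The sketch below shows the basic mechanics only (gradients of the class score with respect to a convolutional layer's feature maps, global-average-pooled into channel weights); the model, the layer name and the `ct_slice` input are assumptions for illustration, not taken from the cited paper.

```python
# Minimal Grad-CAM sketch for a trained Keras CNN classifier.
import numpy as np
import tensorflow as tf
from tensorflow import keras

def grad_cam(model, image, conv_layer_name, class_index=None):
    """Return a coarse heatmap over `image` for the predicted (or given) class."""
    # Model exposing both the chosen conv layer's feature maps and the prediction.
    grad_model = keras.Model(
        model.input,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)    # d(class score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))    # global-average-pool the gradients
    cam = tf.reduce_sum(weights[:, tf.newaxis, tf.newaxis, :] * conv_out, axis=-1)
    cam = tf.nn.relu(cam)[0]                        # keep only positive influence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Hypothetical usage: overlay the heatmap on a CT slice to see which regions
# the classifier relied on for its prediction.
# heatmap = grad_cam(trained_cnn, ct_slice, conv_layer_name="conv2d_3")
```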
IEEE Reviews in Biomedical Engineering, Journal Year: 2022, Volume and Issue: 16, P. 5 - 21, Published: June 23, 2022
Despite the myriad of peer-reviewed papers demonstrating novel Artificial Intelligence (AI)-based solutions to COVID-19 challenges during the pandemic, few have made a significant clinical impact, especially in diagnosis and disease precision staging. One major cause of such low impact is the lack of model transparency, which significantly limits AI adoption in real clinical practice. To solve this problem, AI models need to be explained to their users. Thus, we conducted a comprehensive study of Explainable Artificial Intelligence (XAI) using the PRISMA methodology. Our findings suggest that XAI can improve model performance, instill trust in users, and assist users in decision-making. In this systematic review, we introduce common XAI techniques and their utility with specific examples of their application. We discuss the evaluation of XAI results because it is an important step in maximizing the value of AI-based clinical decision support systems. Additionally, we present traditional, modern, and advanced XAI models to demonstrate the evolution of these techniques. Finally, we provide a best practice guideline that developers can refer to during experimentation, and we also offer potential future directions. This, hopefully, will promote XAI in biomedicine and healthcare.
Multimedia Tools and Applications, Journal Year: 2023, Volume and Issue: 83(2), P. 5893 - 5927, Published: May 29, 2023
Deep learning (DL) is becoming a fast-growing field in the medical domain, and it helps in the timely detection of infectious diseases (IDs), which is essential to the management of these diseases and the prediction of future occurrences. Many scientists and scholars have implemented DL techniques for pandemics, IDs and other healthcare-related purposes, but these outcomes come with various limitations and research gaps. For the purpose of achieving an accurate, efficient and less complicated DL-based system, therefore, this study carried out a systematic literature review (SLR) on pandemics using DL techniques. The survey is anchored by four objectives and a state-of-the-art review of forty-five papers, out of seven hundred and ninety retrieved from different scholarly databases, which were used to analyze and evaluate the trend of DL application areas in pandemics. The study used tables and graphs extracted from the related articles and online repositories for the analysis, which showed that DL is a good tool for pandemic prediction. Scopus and Web of Science were given the most attention in the current study because they contain suitable scientific findings in the subject area. Finally, the study presents forty-four (44) existing studies and their technique performances. The challenges identified include low model performance due to computational complexities, improper labeling, and the absence of high-quality datasets, among others. The study suggests possible solutions, such as the development of improved models or the reduction of the output layer of the architecture, for pandemic-prone disease considerations.