Alzheimer's & Dementia, Journal Year: 2023, Volume and Issue: 19(5), P. 2135 - 2149, Published: Feb. 3, 2023
Abstract

Introduction

Machine learning research into automated dementia diagnosis is becoming increasingly popular but so far has had limited clinical impact. A key challenge is building robust and generalizable models that generate decisions that can be reliably explained. Some models are designed to be inherently "interpretable," whereas post hoc "explainability" methods are used for other models.

Methods

Here we sought to summarize the state-of-the-art of interpretable machine learning for dementia.

Results

We identified 92 studies using PubMed, Web of Science, and Scopus. Studies demonstrate promising classification performance but vary in their validation procedures and reporting standards, and rely heavily on popular data sets.

Discussion

Future work should incorporate clinicians to validate explanation methods and make conclusive inferences about dementia-related disease pathology. Critically analyzing model explanations also requires an understanding of interpretability itself. Patient-specific explanations are required for such models to benefit clinical practice.
The Lancet Digital Health, Journal Year: 2021, Volume and Issue: 3(11), P. e745 - e750, Published: Oct. 25, 2021
The black-box nature of current artificial intelligence (AI) has caused some to question whether AI must be explainable to be used in high-stakes scenarios such as medicine. It has been argued that explainable AI will engender trust with the health-care workforce, provide transparency into the decision making process, and potentially mitigate various kinds of bias. In this Viewpoint, we argue that this argument represents a false hope for explainable AI, and that current explainability methods are unlikely to achieve these goals for patient-level decision support. We provide an overview of current explainability techniques and highlight how various failure cases can cause problems for decision making for individual patients. In the absence of suitable explainability methods, we advocate for rigorous internal and external validation of AI models as a more direct means of achieving the goals often associated with explainability, and we caution against having explainability be a requirement for clinically deployed models.
Medical Image Analysis, Journal Year: 2022, Volume and Issue: 79, P. 102470 - 102470, Published: May 4, 2022
With an increase in deep learning-based methods, the call for explainability of such methods grows, especially in high-stakes decision making areas such as medical image analysis. This survey presents an overview of eXplainable Artificial Intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods. Papers on XAI techniques in medical image analysis are then surveyed and categorized according to anatomical location. The paper concludes with an outlook of future opportunities for XAI in medical image analysis.
Canadian Journal of Cardiology, Journal Year: 2021, Volume and Issue: 38(2), P. 204 - 213, Published: Sept. 14, 2021
Many clinicians remain wary of machine learning because of longstanding concerns about "black box" models. "Black box" is shorthand for models that are sufficiently complex that they are not straightforwardly interpretable to humans. Lack of interpretability in predictive models can undermine trust in those models, especially in health care, in which so many decisions are, quite literally, life and death issues. There has been a recent explosion of research in the field of explainable machine learning aimed at addressing these concerns. The promise of explainable machine learning is considerable, but it is important for cardiologists who may encounter these techniques in clinical decision-support tools or novel research papers to have a critical understanding of both their strengths and their limitations. This paper reviews key concepts and techniques in explainable machine learning as they apply to cardiology. Key concepts reviewed include interpretability vs explainability and global vs local explanations. Techniques demonstrated include permutation importance, surrogate decision trees, local interpretable model-agnostic explanations, and partial dependence plots. We discuss several limitations of these techniques, focusing on how the nature of explanations as approximations may omit important information about how black-box models work and why they make certain predictions. We conclude by proposing a rule of thumb about when it is appropriate to use black-box rather than interpretable models.
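Of the techniques this review names, permutation importance is the simplest to illustrate: shuffle one feature column at a time and measure how much a fitted model's performance drops. Below is a minimal NumPy sketch on synthetic data; the model, features, and coefficients are invented for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2 (all values are illustrative).
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Stand-in "model": ordinary least squares fit via NumPy.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(M):
    return M @ coef

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

baseline = r2(y, predict(X))

def permutation_importance(X, y, n_repeats=10):
    """Mean drop in R^2 when each feature column is shuffled in turn."""
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j only
            drops.append(baseline - r2(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

imp = permutation_importance(X, y)
# Feature 0 should dominate; feature 2 should be near zero.
```

Because the technique only compares model scores before and after shuffling, it treats the model entirely as a black box, which is exactly why it appears in reviews such as this one.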
Knowledge-Based Systems, Journal Year: 2023, Volume and Issue: 263, P. 110273 - 110273, Published: Jan. 11, 2023
Keywords: Deep learning, Meta-survey, Responsible AI

Abstract

The past decade has seen significant progress in artificial intelligence (AI), which has resulted in algorithms being adopted for resolving a variety of problems. However, this success has been met by increasing model complexity and employing black-box AI models that lack transparency. In response to this need, Explainable AI (XAI) has been proposed to make AI more transparent and thus advance its adoption in critical domains. Although there are several reviews of XAI topics in the literature that have identified challenges and potential research directions of XAI, these are scattered. This study, hence, presents a systematic meta-survey of challenges and future research directions in XAI, organized in two themes: (1) general challenges and research directions of XAI and (2) challenges and research directions of XAI based on the machine learning life cycle's phases: design, development, and deployment. We believe our meta-survey contributes to the XAI literature by providing a guide for future exploration in this area.
Cancer Cell, Journal Year: 2022, Volume and Issue: 40(10), P. 1095 - 1110, Published: Oct. 1, 2022
In oncology, the patient state is characterized by a whole spectrum of modalities, ranging from radiology, histology, and genomics to electronic health records. Current artificial intelligence (AI) models operate mainly in the realm of a single modality, neglecting the broader clinical context, which inevitably diminishes their potential. Integration of different data modalities provides opportunities to increase the robustness and accuracy of diagnostic and prognostic models, bringing AI closer to clinical practice. AI models are also capable of discovering novel patterns within and across modalities suitable for explaining differences in patient outcomes or treatment resistance. The insights gleaned from such models can guide exploration studies and contribute to the discovery of novel biomarkers and therapeutic targets. To support these advances, here we present a synopsis of AI methods and strategies for multimodal data fusion and association discovery. We outline approaches to AI interpretability and directions for AI-driven exploration through multimodal data interconnections. We examine challenges in clinical adoption and discuss emerging solutions.
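As a toy illustration of the fusion strategies such a synopsis covers, early fusion concatenates per-modality features into one input for a single model, while late fusion combines independent per-modality predictions. Everything below (modality names, feature sizes, the linear scores) is invented for the sketch and is not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-patient feature vectors from two modalities
# (say, imaging and genomics); shapes are arbitrary for the demo.
imaging = rng.normal(size=(100, 8))
genomics = rng.normal(size=(100, 4))
# Toy label driven by one feature from each modality.
y = (imaging[:, 0] + genomics[:, 0] > 0).astype(float)

# Early fusion: concatenate modality features per patient,
# then feed the joint vector to one downstream model.
fused = np.concatenate([imaging, genomics], axis=1)  # shape (100, 12)

# Late fusion: each modality produces its own score (here, a trivial
# linear score per modality), and the scores are averaged at the end.
score_img = imaging[:, 0]
score_gen = genomics[:, 0]
late_pred = ((score_img + score_gen) / 2 > 0).astype(float)

accuracy = (late_pred == y).mean()  # perfect by construction in this toy setup
```

The trade-off the abstract alludes to shows up even here: early fusion lets a model learn cross-modal interactions from the joint vector, while late fusion keeps each modality's model independent and is easier to deploy when one modality is missing.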
Journal of Healthcare Engineering, Journal Year: 2022, Volume and Issue: 2022, P. 1 - 16, Published: April 15, 2022
Deep learning has been extensively applied to segmentation in medical imaging. U-Net, proposed in 2015, shows the advantages of accurate segmentation of small targets and a scalable network architecture. With increasing requirements for the performance of segmentation in medical imaging in recent years, U-Net has been cited academically more than 2500 times, and many scholars have been constantly developing it. This paper summarizes medical image segmentation technologies based on the U-Net structure and its variants concerning their structure, innovation, efficiency, etc.; reviews and categorizes the related methodology; and introduces the loss functions, evaluation parameters, and modules commonly applied to segmentation in medical imaging, which will provide a good reference for future research.
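Among the loss functions and evaluation parameters such surveys cover, the Dice coefficient (and its soft-loss form used for training) is the standard for segmentation. Below is a minimal NumPy sketch, not drawn from the paper itself; the 4x4 masks are invented for the example.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def dice_loss(prob, target, eps=1e-7):
    """Soft Dice loss on predicted probabilities, minimized during training."""
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

# Toy 4x4 ground-truth and predicted masks (illustrative only).
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
pred = np.array([[0, 1, 1, 1],
                 [0, 0, 1, 1],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])

score = dice_coefficient(pred, target)  # → 2*4 / (5+4) = 8/9 ≈ 0.889
```

Unlike per-pixel accuracy, Dice is insensitive to the large background region, which is why it suits the small-target segmentation setting the abstract highlights.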