Decision Analytics Journal, 2023, Volume and Issue: 7, P. 100230. Published: April 17, 2023
Artificial Intelligence (AI) uses systems and machines to simulate human intelligence and solve common real-world problems. Machine learning and deep learning are AI technologies that use algorithms to predict outcomes more accurately without relying on human intervention. However, the opacity of black-box models and their cumulative complexity make it difficult to understand how results are achieved. Explainable AI (XAI) is a term that refers to techniques which provide explanations for their decisions or predictions to users. XAI aims to increase the transparency, trustworthiness, and accountability of AI systems, especially when they are deployed in high-stakes applications such as healthcare, finance, and security. This paper offers a systematic literature review of XAI approaches across different domains and observes 91 recently published articles describing the development of XAI applications in manufacturing, transportation, and finance. We investigated the Scopus, Web of Science, IEEE Xplore, and PubMed databases to find pertinent publications published between January 2018 and October 2022. The review covers research on XAI modelling that was retrieved from these scholarly databases using keyword searches. We believe our review extends existing work by providing a roadmap for further research in the field.
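
The review itself prescribes no single method; as a hedged illustration of what an "explanation for a prediction" can mean in practice, the minimal sketch below trains a black-box classifier on synthetic data and reports one simple, global form of explanation. All data and feature names are invented for illustration.

```python
# Minimal sketch: a black-box model plus a global explanation for users.
# Synthetic data; feature names f0..f3 are illustrative, not from the review.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # 4 synthetic input features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # outcome driven by features 0 and 2

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Impurity-based feature importances: one simple, global explanation.
for name, score in zip(["f0", "f1", "f2", "f3"], model.feature_importances_):
    print(f"{name}: {score:.3f}")              # f0 and f2 should dominate
```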

Computers in Biology and Medicine, 2022, Volume and Issue: 149, P. 106043. Published: Sept. 7, 2022
With the advent of machine learning (ML) and deep learning (DL) empowered applications for critical domains like healthcare, questions about the liability, trust, and interpretability of their outputs are being raised. The black-box nature of various DL models is a roadblock to clinical utilization. Therefore, to gain the trust of clinicians and patients, we need to provide explanations for the decisions of these models. With the promise of enhancing the transparency of black-box models, researchers are in the phase of maturing the field of eXplainable ML (XML). In this paper, we provide a comprehensive review of explainable and interpretable ML techniques for healthcare applications. Along with highlighting the security, safety, and robustness challenges that hinder the trustworthiness of ML, we also discuss the ethical issues that arise from the use of ML/DL in healthcare. We describe how trustworthy ML can resolve all these problems. Finally, we elaborate on the limitations of existing approaches and highlight open research problems that require further development.
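
The survey spans many explanation families; as one hedged sketch of a model-agnostic, post-hoc technique of the kind such reviews cover, the snippet below applies permutation importance (via scikit-learn) to a black-box classifier. The tabular data is a synthetic stand-in, and the clinical-sounding feature names are hypothetical.

```python
# Sketch of a model-agnostic, post-hoc explanation: permutation importance.
# Synthetic stand-in for clinical tabular data; feature names are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 3))
y = (X[:, 1] > 0.2).astype(int)  # label depends mainly on feature 1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature and measure the drop in held-out accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, mean in zip(["age", "lab_value", "dose"], result.importances_mean):
    print(f"{name}: {mean:.3f}")  # 'lab_value' should score highest
```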

Frontiers in Artificial Intelligence, 2022, Volume and Issue: 5. Published: May 30, 2022
The lack of transparency is one of artificial intelligence (AI)'s fundamental challenges, but the concept of transparency might be even more opaque than AI itself. Researchers in different fields who attempt to provide solutions to improve AI's transparency articulate neighboring concepts that include, besides transparency, explainability and interpretability. Yet, there is no common taxonomy, neither within a single field (such as data science) nor between fields (law and data science). In certain areas like healthcare, transparency requirements are crucial since decisions directly affect people's lives. In this paper, we suggest an interdisciplinary vision of how to tackle this issue and propose a single point of reference for both legal scholars and data scientists on transparency and related concepts. Based on an analysis of European Union (EU) legislation and the literature in computer science, we submit that transparency shall be considered a "way of thinking" and an umbrella concept characterizing the process of AI development and use. Transparency is achieved through a set of measures such as interpretability and explainability, communication, auditability, traceability, information provision, record-keeping, governance and management, and documentation. This approach to dealing with transparency is of a general nature, but transparency measures must always be contextualized. By analyzing the healthcare context, we show that transparency can be viewed as a system of accountabilities of the involved subjects (AI developers, healthcare professionals, and patients) distributed at different layers (the insider, internal, and external layers, respectively). The transparency-related accountabilities are built into the existing accountability picture, which justifies the need to investigate the relevant legal frameworks. These frameworks correspond to the layers of this system: the requirement of informed medical consent correlates to the external layer, and the Medical Devices Framework to the insider and internal layers. We investigated the said frameworks to inform AI developers of what is already expected from them with regard to transparency. We also discovered gaps in the existing legislative frameworks concerning transparency and suggest how to fill them in.

Military Medical Research, 2023, Volume and Issue: 10(1). Published: May 16, 2023
Modern medicine is reliant on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis and the prediction of clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential as a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application to the diagnosis and treatment of patients with cancer. We focus on machine learning approaches, covering feature selection, imbalanced datasets, and multi-modality fusion. Furthermore, we discuss the stability, reproducibility, and interpretability of radiomic features, as well as the generalizability of models. Finally, we offer possible solutions to these challenges for future research.
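
The review does not tie radiomics to a particular toolchain; as a hedged sketch of what high-throughput feature extraction means in practice, the snippet below computes a handful of first-order radiomic features (intensity statistics within a segmented region) on a synthetic image. Real pipelines operate on DICOM/NIfTI volumes and typically use dedicated packages such as PyRadiomics.

```python
# Sketch of first-order radiomic feature extraction from a segmented region.
# The image and mask are synthetic stand-ins for a slice and a tumor contour.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
image = rng.normal(loc=100, scale=15, size=(64, 64))  # stand-in image slice
mask = np.zeros(image.shape, dtype=bool)
mask[20:44, 20:44] = True                             # stand-in segmentation

roi = image[mask]                                     # intensities inside the region

counts, _ = np.histogram(roi, bins=32)                # intensity histogram for entropy
features = {
    "mean": roi.mean(),
    "std": roi.std(),
    "skewness": stats.skew(roi),
    "kurtosis": stats.kurtosis(roi),
    "energy": np.sum(roi ** 2),
    "entropy": stats.entropy(counts / counts.sum()),
}
for name, value in features.items():
    print(f"{name}: {value:.3f}")
```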

Digital Health, 2023, Volume and Issue: 9. Published: Jan. 1, 2023
Objective: Artificial intelligence (AI) has been increasingly applied in various fields of science and technology. In line with current research, medicine involves an increasing number of artificial intelligence technologies, and the rapid introduction of AI can lead to both positive and negative effects. This is a multilateral analytical literature review aimed at identifying the main branches and trends in the use of AI in medicine.
Methods: The total number of sources reviewed is n = 89; they are analyzed based on the evidence-based reporting guideline PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) for systematic reviews.
Results: From the initially selected 198 references, 155 references were obtained from databases and the remaining 43 were found on the open internet as direct links to publications. Finally, 89 references were evaluated after the exclusion of unsuitable sources: duplicates and generalized information without a focus on users.
Conclusions: The article describes the current state and prospects of future AI use in medicine. The findings of this review will be useful for healthcare professionals for improving the circulation of AI in medicine at the design and implementation stage.

Energy and AI, 2024, Volume and Issue: 16, P. 100358. Published: March 12, 2024
Electric Load Forecasting (ELF) is the central instrument for planning and controlling demand response programs, electricity trading, and consumption optimization. Due to the increasing automation of these processes, meaningful and transparent forecasts become more and more important. Still, at the same time, the complexity of the machine learning models and architectures used increases. Because there is an increasing interest in interpretable and explainable load forecasting methods, this work conducts a literature review of the approaches already applied regarding explainability and interpretability in load forecasting using machine learning. Based on extensive research covering eight publication portals, recurring modeling approaches, trends, and techniques are identified and clustered by the properties through which they achieve explainable or interpretable forecasts. The results show an increase in the use of probabilistic models, methods for time series decomposition, and fuzzy logic, in addition to classically used models. Dominant explanation techniques are Feature Importance and Attention mechanisms. The discussion shows that a lot of knowledge from related fields still needs to be adapted to the problems of ELF. Compared to other applications such as clustering, there are currently relatively few results, but with an increasing trend.
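
Among the interpretable approaches the review identifies is time series decomposition; the sketch below illustrates that idea with a classical additive decomposition of a synthetic hourly load series via statsmodels. The series, its parameters, and the daily cycle assumption (period=24) are invented for illustration.

```python
# Sketch: classical decomposition as an interpretable view of an hourly load.
# The load series is synthetic; period=24 assumes a daily consumption cycle.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(3)
n = 14 * 24                                            # two weeks of hourly data
index = pd.date_range("2024-01-01", periods=n, freq="h")
daily_cycle = 10 * np.sin(2 * np.pi * np.arange(n) / 24)
trend = 0.05 * np.arange(n)
load = pd.Series(100 + trend + daily_cycle + rng.normal(0, 2, n), index=index)

# Each component (trend, daily seasonality, residual) is directly inspectable,
# which is what makes decomposition-based forecasts easier to explain.
result = seasonal_decompose(load, model="additive", period=24)
print(result.seasonal.head(24))     # the estimated daily load shape
print(result.trend.dropna().head()) # the estimated slow drift in demand
```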