AI, Journal Year: 2023, Volume and Issue: 4(3), P. 620 - 651. Published: Aug. 1, 2023
Explainable Artificial Intelligence (XAI) has emerged as a crucial research area to address the interpretability challenges posed by complex machine learning models. In this survey paper, we provide a comprehensive analysis of existing approaches in the field of XAI, focusing on the tradeoff between model accuracy and interpretability. Motivated by the need to address this tradeoff, we conduct an extensive review of the literature, presenting a multi-view taxonomy that offers a new perspective on XAI methodologies. We analyze various sub-categories of XAI methods, considering their strengths, weaknesses, and practical challenges. Moreover, we explore causal relationships in explanations and discuss approaches dedicated to explaining cross-domain classifiers. The latter is particularly important in scenarios where training and test data are sampled from different distributions. Drawing on insights from our analysis, we propose future research directions, including exploring explainability in allied learning paradigms, developing evaluation metrics for both traditionally trained and deep learning-based classifiers, and applying neural architecture search techniques to minimize the accuracy–interpretability tradeoff. This paper provides an overview of the state of the art, serving as a valuable resource for researchers and practitioners interested in understanding and advancing the field.
Management Decision, Journal Year: 2024, Volume and Issue: unknown. Published: June 12, 2024
Purpose
This study investigates the profound impact of artificial intelligence (AI) capabilities on decision-making processes and organizational performance, addressing a crucial gap in the literature by exploring the mediating role of decision-making speed and quality.
Design/methodology/approach
Drawing upon resource-based theory and prior research, this study constructs a comprehensive model and hypotheses to illuminate the influence of AI capabilities within organizations on decision-making speed, decision quality, and, ultimately, organizational performance. A dataset comprising 230 responses from diverse firms forms the basis of the analysis, with the study employing partial least squares structural equation modeling (PLS-SEM) for robust data examination.
Findings
The results demonstrate the pivotal role of AI capabilities in shaping organizational outcomes: AI capability significantly and positively affects decision-making speed, decision quality, and overall performance. Notably, decision-making speed is a critical factor contributing to enhanced performance. The analysis further uncovered mediation effects, suggesting that decision-making speed and quality partially mediate the relationship between AI capabilities and organizational performance.
Originality/value
This study contributes to the existing body of knowledge by providing empirical evidence of the multifaceted impact of AI capabilities. Elucidating these mediating mechanisms advances our understanding of the complex processes through which AI capabilities drive organizational success.
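The mediation logic described in the Findings can be sketched numerically. The study itself uses PLS-SEM on latent constructs; the minimal sketch below instead uses two ordinary least-squares fits on synthetic, hypothetical scores (the variable names and coefficients are illustrative assumptions, not study data) to show how an indirect effect is estimated as the product of the predictor-to-mediator and mediator-to-outcome paths.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Estimate a simple mediation (indirect) effect a*b via two OLS fits:
    m = a*x + e1, then y = b*m + c*x + e2. Illustrative stand-in only;
    PLS-SEM as used in the study handles latent constructs, omitted here."""
    # Path a: regress the mediator on the predictor (with intercept)
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    # Path b: regress the outcome on mediator and predictor jointly
    X2 = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][1]
    return a * b

# Hypothetical data: AI capability -> decision speed -> performance
rng = np.random.default_rng(0)
x = rng.normal(size=500)                                   # AI capability
m = 0.6 * x + rng.normal(scale=0.5, size=500)              # decision speed
y = 0.7 * m + 0.2 * x + rng.normal(scale=0.5, size=500)    # performance
print(indirect_effect(x, m, y))  # near 0.6 * 0.7 = 0.42 by construction
```

Because the direct path (0.2) remains nonzero alongside the indirect path, this synthetic setup mirrors the *partial* mediation the study reports.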
npj 2D Materials and Applications, Journal Year: 2025, Volume and Issue: 9(1). Published: Feb. 1, 2025
MXenes are a versatile family of 2D inorganic materials with applications in energy storage, shielding, sensing, and catalysis. This review highlights computational studies using density functional theory and machine-learning approaches to explore their structure (stacking, functionalization, doping), properties (electronic, mechanical, magnetic), and application potential. Key advances and challenges are critically examined, offering insights into applying computational research to transition these materials from the lab to practical use.
Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1). Published: Jan. 28, 2025
Abstract
Glaucoma poses a growing health challenge projected to escalate in the coming decades. However, current automated diagnostic approaches rely solely on black-box deep learning models, lacking explainability and trustworthiness. To address this issue, this study uses optical coherence tomography (OCT) images to develop an explainable artificial intelligence (XAI) tool for diagnosing and staging glaucoma, with a focus on its clinical applicability. A total of 334 normal and 268 glaucomatous eyes (86 early, 72 moderate, 110 advanced) were included; signal processing theory was employed, and model interpretability was rigorously evaluated. Leveraging SHapley Additive exPlanations (SHAP)-based global feature ranking and partial dependency analysis (PDA)-estimated decision boundary cut-offs of machine learning (ML) models, a novel algorithm was developed to implement the XAI tool. Using the selected features, the ML models produce AUCs of 0.96 (95% CI: 0.95–0.98), 0.98 (95% CI: 0.96–1.00), and 1.00 (95% CI: 1.00–1.00), respectively, in differentiating early, moderate, and advanced glaucoma patients. Overall, the tool outperformed clinicians at the early stage and overall, with 10.4–11.2% higher accuracy. The user-friendly software shows potential as a valuable tool for eye care practitioners, offering transparent and interpretable insights to improve clinical decision-making.
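The partial-dependence idea behind the study's decision-boundary cut-offs can be sketched without any ML library: sweep one feature over a grid, average the model's predicted probability over the dataset at each grid value, and read off where the curve crosses 0.5. The model, feature names, and numbers below are hypothetical stand-ins (a simple logistic rule on a synthetic "thickness" feature), not the study's trained models or OCT data.

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """Mean predicted probability as `feature` sweeps over `grid`
    while all other features keep their observed values."""
    curve = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v          # clamp the chosen feature everywhere
        curve.append(predict(Xv).mean())
    return np.array(curve)

# Hypothetical stand-in model: thinner feature-0 (e.g. a retinal layer
# thickness) -> higher predicted glaucoma probability via a logistic rule.
def predict(X):
    return 1.0 / (1.0 + np.exp(0.2 * (X[:, 0] - 80.0)))

rng = np.random.default_rng(1)
X = rng.normal(loc=[90.0, 0.5], scale=[10.0, 0.1], size=(200, 2))
grid = np.linspace(60, 120, 61)
pd_curve = partial_dependence(predict, X, feature=0, grid=grid)
# Estimated cut-off: grid value where the PD curve crosses probability 0.5
cutoff = grid[np.argmin(np.abs(pd_curve - 0.5))]
print(cutoff)  # 80.0 for this synthetic rule, by construction
```

A SHAP-style global feature ranking (the other ingredient the abstract names) would typically come from the `shap` library and is omitted here to keep the sketch dependency-free.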
Electronic Markets, Journal Year: 2022, Volume and Issue: 32(4), P. 2079 - 2102. Published: Oct. 23, 2022
Abstract
Contemporary decision support systems are increasingly relying on artificial intelligence technology such as machine learning algorithms to form intelligent systems. These systems have a human-like capacity for selected applications, but their output is based on a rationale which cannot be looked up conveniently and constitutes a black box. As a consequence, acceptance by end-users remains somewhat hesitant. While lacking transparency has been said to hinder trust and enforce aversion towards these systems, studies that connect user trust to acceptance remain scarce. In response, our research is concerned with the development of a theoretical model that explains end-user acceptance of intelligent systems. We utilize the unified theory of acceptance and use of technology in information systems research as well as explanation-related theories to develop an initial model. The proposed model is tested in an industrial maintenance workplace scenario, using domain experts as participants to represent the user group. Results show that acceptance is performance-driven at first sight. However, transparency plays an important indirect role in regulating the perception of performance.
International Journal of Intelligent Systems, Journal Year: 2023, Volume and Issue: 2023, P. 1 - 41. Published: Oct. 26, 2023
Given the tremendous potential and influence of artificial intelligence (AI) and algorithmic decision-making (DM), these systems have found wide-ranging applications across diverse fields, including education, business, healthcare industries, government, and justice sectors. While AI and DM offer significant benefits, they also carry the risk of unfavourable outcomes for users and society. As a result, ensuring the safety, reliability, and trustworthiness of these systems becomes crucial. This article aims to provide a comprehensive review of the synergy between AI and DM, focussing on the importance of trustworthiness. The review addresses the following four key questions, guiding readers towards a deeper understanding of this topic: (i) why do we need trustworthy AI? (ii) what are the requirements for trustworthy AI? In line with the second question, the requirements that establish trustworthiness are explained, including explainability, accountability, robustness, fairness, acceptance of AI, privacy, accuracy, reproducibility, and human agency and oversight. (iii) how can we have trustworthy data? and (iv) what are the priorities in terms of trustworthy and challenging applications? Regarding the last question, six different application areas are discussed: environmental science, 5G-based IoT networks, robotics for architecture, engineering and construction, financial technology, and healthcare. The article emphasises that trustworthiness must be addressed before the deployment of these systems in order to achieve the goal of AI for good. An example is provided that demonstrates how trustworthy AI can be employed to eliminate bias in human resources management systems. The insights and recommendations presented in this paper will serve as a valuable guide for researchers seeking trustworthy AI applications.
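The bias-elimination example the article mentions for HR systems rests on first measuring bias. A minimal sketch of one common fairness check, demographic parity difference, is shown below on hypothetical screening decisions (the data and group labels are invented for illustration; the article's own example may use different metrics).

```python
import numpy as np

def demographic_parity_diff(pred, group):
    """Absolute difference in positive-prediction rates between two groups.
    One simple fairness check among many; a value near 0 suggests the model
    shortlists candidates at similar rates regardless of group membership."""
    rates = [pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

# Hypothetical shortlisting decisions (1 = shortlist) for two groups A/B
pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
print(demographic_parity_diff(pred, group))  # |0.6 - 0.4| ≈ 0.2
```

In practice such a statistic would feed back into training or thresholding (e.g. per-group thresholds) to reduce the measured disparity.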
Nature Communications, Journal Year: 2024, Volume and Issue: 15(1). Published: Jan. 13, 2024
Abstract
Antimicrobial resistance (AMR) and healthcare-associated infections pose a significant threat globally. One key prevention strategy is to follow antimicrobial stewardship practices; in particular, to maximise targeted oral therapy and reduce the use of indwelling vascular devices for intravenous (IV) administration. Appreciating when an individual patient can switch from IV to oral antibiotic treatment is often non-trivial and not standardised. To tackle this problem, we created a machine learning model to predict when a patient could switch based on routinely collected clinical parameters. 10,362 unique intensive care unit stays were extracted and two informative feature sets identified. Our best model achieved a mean AUROC of 0.80 (SD 0.01) on the hold-out set while not being biased against individuals with protected characteristics. Interpretability methodologies were employed to create clinically useful visual explanations. In summary, our model provides individualised, fair, and interpretable predictions of readiness for IV-to-oral switching of antibiotic treatment. Prospective evaluation of safety and efficacy is needed before such a technology can be applied clinically.
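The headline figure above, a mean AUROC with a standard deviation over hold-out evaluations, can be reproduced in miniature from first principles. The sketch below implements AUROC via its rank-sum (Mann-Whitney U) formulation and averages it over synthetic resampled splits; the labels, scores, and split count are invented for illustration and are not the study's data.

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the rank-sum formulation: the probability that a randomly
    chosen positive is scored above a randomly chosen negative (no ties)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic hold-out evaluation over several splits (illustrative only)
rng = np.random.default_rng(2)
aucs = []
for _ in range(5):
    y = rng.integers(0, 2, size=400)                 # fake switch labels
    s = y + rng.normal(scale=1.2, size=400)          # scores track labels
    aucs.append(auroc(y, s))
print(np.mean(aucs), np.std(aucs))  # mean AUROC with spread across splits
```

Reporting the spread alongside the mean, as the study does with "0.80 (SD 0.01)", distinguishes a stably good model from one that is good only on a lucky split.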