Explainable artificial intelligence (XAI) has become a significant approach for increasing trust in techniques used by the machine learning community. Similarly, given the importance of the applications of metaheuristics, often the optimisation of critical national infrastructure such as power generation facilities, it is important that the trustworthiness of the tools optimising these problems is assured, and the use of XAI within the metaheuristic domain is one way of achieving this. This chapter considers the application of a tool previously demonstrated on the knapsack problem to offshore wind farm layouts, and discusses the extent to which it is able to explain the processes that identify optimal designs.
IEEE Transactions on Evolutionary Computation, 2022, 27(1), pp. 5-25. Published: Nov. 9, 2022.
Computer vision (CV) is a large and important field in artificial intelligence covering a wide range of applications. Image analysis is a major task in CV, aiming to extract, analyze, and understand the visual content of images. However, image-related tasks are very challenging due to many factors, e.g., high variations across images, high dimensionality, domain expertise requirements, and image distortions. Evolutionary computation (EC) approaches have been widely used for image analysis with significant achievement. However, there is no comprehensive survey of existing EC approaches to image analysis. To fill this gap, this article provides a comprehensive survey covering all essential image analysis tasks, including edge detection, image segmentation, image feature analysis, image classification, object detection, and others. This survey aims to provide a better understanding of evolutionary computer vision (ECV) by discussing the contributions of different approaches and exploring how and why EC is used for CV and image analysis. The applications, challenges, issues, and trends associated with this research area are also discussed and summarized to provide further guidelines and opportunities for future research.
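As a rough illustration of what an EC approach to an image-analysis task can look like in practice (this is a generic sketch, not code from the surveyed work), the following minimal example evolves a single segmentation threshold for a toy grayscale intensity distribution with a simple genetic algorithm; the data, the Otsu-style fitness, and the GA settings are all illustrative assumptions.

```python
# Illustrative sketch only: a minimal genetic algorithm evolving a single
# segmentation threshold for toy image data. Not code from the surveyed work.
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": two intensity clusters (background ~60, foreground ~180).
image = np.concatenate([rng.normal(60, 10, 500), rng.normal(180, 12, 500)])
image = np.clip(image, 0, 255)

def fitness(threshold):
    """Otsu-style between-class variance: higher means a better split."""
    low, high = image[image < threshold], image[image >= threshold]
    if len(low) == 0 or len(high) == 0:
        return 0.0
    w0, w1 = len(low) / len(image), len(high) / len(image)
    return w0 * w1 * (low.mean() - high.mean()) ** 2

# Simple (mu + lambda)-style loop over threshold values in [0, 255].
population = rng.uniform(0, 255, size=20)
for _ in range(50):
    offspring = population + rng.normal(0, 10, size=population.shape)  # mutation
    candidates = np.clip(np.concatenate([population, offspring]), 0, 255)
    scores = np.array([fitness(t) for t in candidates])
    population = candidates[np.argsort(scores)[-20:]]  # survivor selection

best = population[np.argmax([fitness(t) for t in population])]
print(f"Evolved threshold: {best:.1f}")  # expected near the midpoint of the two clusters
```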
Neural Networks, 2024, 177, 106392. Published: May 15, 2024.
Explainable artificial intelligence (XAI) has been increasingly investigated to enhance the transparency of black-box models, promoting better user understanding and trust. Developing an XAI method that is faithful to models and plausible to users is both a necessity and a challenge. This work examines whether embedding human attention knowledge into saliency-based XAI methods for computer vision could enhance their plausibility and faithfulness. Two novel object detection XAI methods, namely FullGrad-CAM and FullGrad-CAM++, were first developed to generate object-specific explanations by extending the current gradient-based XAI methods for image classification models. Using human attention as an objective measure, these methods achieve higher explanation plausibility. Interestingly, saliency-based XAI methods, when applied to object detection models, generally produce saliency maps that are less faithful to the model than those obtained from the same methods on the image classification task. Accordingly, a human attention-guided XAI method (HAG-XAI) was proposed to learn how best to combine the explanatory information, using trainable activation functions and smoothing kernels to maximize the similarity between the explanation map and the human attention map. The methods were evaluated on the widely used BDD-100K, MS-COCO, and ImageNet datasets and compared with typical perturbation-based XAI methods. Results suggest that HAG-XAI enhanced explanation plausibility and user trust at the expense of faithfulness for the image classification models, while for the object detection models it enhanced plausibility, faithfulness, and trust simultaneously and outperformed the existing state-of-the-art methods.
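The combination step described above can be illustrated with a minimal, hypothetical sketch: a saliency map from a gradient-based explainer is passed through a trainable activation and a trainable smoothing kernel, and those parameters are fitted so that the resulting explanation map matches a human attention map. The tensor shapes, the sigmoid-based parameterisation, and the MSE loss used as a similarity proxy are assumptions made for illustration, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the idea described above: learn an activation
# and a smoothing kernel so a model saliency map better matches human attention.
import torch
import torch.nn.functional as F

class AttentionGuidedCombiner(torch.nn.Module):
    def __init__(self, kernel_size: int = 11):
        super().__init__()
        # Trainable activation: scale and shift before a sigmoid (an assumption).
        self.scale = torch.nn.Parameter(torch.tensor(1.0))
        self.shift = torch.nn.Parameter(torch.tensor(0.0))
        # Trainable smoothing kernel, initialised as a uniform (box) filter.
        init = torch.full((1, 1, kernel_size, kernel_size), 1.0 / kernel_size**2)
        self.kernel = torch.nn.Parameter(init)

    def forward(self, saliency: torch.Tensor) -> torch.Tensor:
        # saliency: (B, 1, H, W) map produced by a gradient-based explainer.
        x = torch.sigmoid(self.scale * saliency + self.shift)                # activation
        x = F.conv2d(x, self.kernel, padding=self.kernel.shape[-1] // 2)     # smoothing
        return x

# Toy tensors standing in for an explainer's saliency map and a human attention map.
saliency_map = torch.rand(4, 1, 64, 64)
human_attention = torch.rand(4, 1, 64, 64)

model = AttentionGuidedCombiner()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    optimizer.zero_grad()
    explanation = model(saliency_map)
    loss = F.mse_loss(explanation, human_attention)  # proxy for map similarity
    loss.backward()
    optimizer.step()
```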
ACM Transactions on Evolutionary Learning and Optimization, 2024, 4(1), pp. 1-30. Published: Jan. 30, 2024.
Interpretability is a critical aspect to ensure the fair and responsible use of machine learning (ML) in high-stakes applications. Genetic programming (GP) has been used to obtain interpretable ML models because it operates at the level of functional building blocks: if these blocks are interpretable, there is a chance that their composition (i.e., the entire model) is also interpretable. However, the degree to which a model is interpretable depends on the observer. Motivated by this, we study a recently-introduced human-in-the-loop system that allows the user to steer GP's generation process according to their preferences, which shall be online-learned by an artificial neural network (ANN). We focus on models expressed as analytical functions (i.e., symbolic regression), as this is a key problem in interpretable ML, and propose a two-fold contribution. First, we devise more general representations for the ANN to learn upon, to enable the application to a wider range of problems. Second, we delve into a deeper analysis of the system's components. To this end, we carry out an incremental experimental evaluation aimed at (1) studying the effectiveness with which the ANN can capture the interpretability perceived by simulated users, (2) investigating how the outcome of the search is affected across different feedback profiles, and (3) determining whether human participants would prefer models that were generated with or without their involvement. Our results provide clarity on the pros and cons of using this approach to discover interpretable models with GP.
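The following is a minimal sketch, under stated assumptions, of the kind of online-learned preference model described above: a small neural network is trained from (here, simulated) pairwise user choices over candidates summarised by simple readability features, and could then be used to bias selection in GP. The features, the network, the pairwise loss, and the simulated user are illustrative, not the system studied in the paper.

```python
# Minimal sketch: learn a preference-based interpretability scorer from pairwise
# user feedback. All details are illustrative assumptions, not the paper's system.
import torch
import torch.nn.functional as F

# Each candidate model is summarised by hand-picked features, e.g.
# (expression size, tree depth, number of distinct operators, number of constants).
def simulated_user_prefers(feat_a, feat_b):
    # Stand-in user who simply prefers smaller expressions.
    return feat_a[0] < feat_b[0]

scorer = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
)
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-2)

rng = torch.Generator().manual_seed(0)
for _ in range(500):
    # Draw two random candidate feature vectors and ask the (simulated) user.
    a = torch.randint(1, 30, (4,), generator=rng).float()
    b = torch.randint(1, 30, (4,), generator=rng).float()
    preferred, other = (a, b) if simulated_user_prefers(a, b) else (b, a)

    # Bradley-Terry style pairwise loss: the preferred candidate should score higher.
    margin = scorer(preferred) - scorer(other)
    loss = F.softplus(-margin).mean()   # equals -log(sigmoid(margin))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The learned scorer can now rank new candidates by estimated interpretability,
# e.g. as an extra objective or a tie-breaker during GP selection.
```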