In this work, we consider and discuss the problems that come with trying to explain human-machine intelligence: how explainable artificial intelligence research is being carried out, the pitfalls and limitations of current approaches, and the bigger question of whether or not we need explanations in order to trust inherently complex, large intelligent systems.
Proceedings of the Genetic and Evolutionary Computation Conference Companion, Journal year: 2022, Issue: unknown, Pages: 1757–1762. Published: July 9, 2022.
In the past decade, Explainable Artificial Intelligence (XAI) has attracted great interest in the research community, motivated by the need for explanations in critical AI applications. Some recent advances in XAI are based on Evolutionary Computation (EC) techniques, such as Genetic Programming. We call this trend EC for XAI. We argue that the full potential of EC methods has not yet been fully exploited in XAI, and we call on the community to direct future efforts to this field. Likewise, we find that there is a growing concern within EC regarding the explanation of population-based methods, i.e., of their search process and outcomes. While some attempts have been made in this direction (although, in most cases, not explicitly put in the context of XAI), we believe that there are still several opportunities and open questions that, in principle, may promote a safer and broader adoption of EC in real-world applications. We call this trend XAI within EC. In this position paper, we briefly overview the main results in the two trends above, and we suggest that the EC community may play a major role in the achievement of XAI.
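To make the "EC for XAI" trend concrete: genetic programming can evolve a human-readable symbolic model of a black-box input-output behaviour, so the evolved expression itself serves as the explanation. The following is a minimal self-contained sketch of that idea, not code from the paper; the toy target function, operator set, and all parameter choices are our own illustrative assumptions.

import operator, random

random.seed(0)

# Toy "black box" we want an interpretable model of: y = x^2 + x.
DATA = [(x, x * x + x) for x in [i / 10 for i in range(-10, 11)]]

OPS = {"add": (operator.add, "+"), "sub": (operator.sub, "-"), "mul": (operator.mul, "*")}
TERMINALS = ["x", 1.0]

def random_tree(depth=3):
    # Grow a random expression tree: nested tuples (op, left, right) or a terminal.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op][0](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Mean squared error against the data; lower is better.
    return sum((evaluate(tree, x) - y) ** 2 for x, y in DATA) / len(DATA)

def mutate(tree, depth=2):
    # Replace a randomly chosen subtree with a fresh random one.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth), right)
    return (op, left, mutate(right, depth))

def to_str(tree):
    if not isinstance(tree, tuple):
        return str(tree)
    op, left, right = tree
    return f"({to_str(left)} {OPS[op][1]} {to_str(right)})"

# Simple truncation-selection evolutionary loop.
pop = [random_tree() for _ in range(200)]
for gen in range(50):
    pop.sort(key=fitness)
    pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(150)]
best = min(pop, key=fitness)
print("best model:", to_str(best), " MSE:", round(fitness(best), 4))

The point is that the output of the search is a readable formula rather than opaque weights, which is what motivates using EC as an XAI technique in the first place.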
Tutorial: Exploratory Landscape Analysis
Authors: Pascal Kerschke ("Friedrich List" Faculty of Transport and Traffic Sciences, TU Dresden, Germany; Center for Scalable Data Science (ScaDS.AI), Dresden/Leipzig; https://orcid.org/0000-0003-2862-1418) and Mike Preuss (LIACS, Leiden University, Netherlands; https://orcid.org/0000-0003-4681-1346)
GECCO '23 Companion: Proceedings of the Companion Conference on Genetic and Evolutionary Computation, July 2023, Pages 990–1007. https://doi.org/10.1145/3583133.3595058. Published: 24 July 2023.
Abstract
Explaining the decisions made by population-based metaheuristics can often be considered difficult due to the stochastic nature of the mechanisms employed by these optimisation methods. As industries continue to adopt these methods in areas that increasingly require end-user input and confirmation, the need to explain the internal decisions being made has grown. In this article, we present our approach to the extraction of explanation-supporting features using trajectory mining. This is achieved through the application of principal components analysis techniques to identify new methods of tracking population diversity changes post-runtime. The algorithm search trajectories were generated by solving a set of benchmark problems with a genetic algorithm and a univariate estimation-of-distribution algorithm, retaining all visited candidate solutions, which were then projected into a lower-dimensional sub-space. We also varied the selection pressure placed on high-fitness solutions by altering the selection operators. Our results show that metrics derived from the projected sub-space are capable of capturing the key learning steps, and that solution variable patterns that explain the fitness function may be captured in the principal component coefficients. A comparative study of variable importance rankings derived from a surrogate model built on the same dataset was also performed. The results show that both approaches are capable of identifying features regarding variable interactions and their influence on fitness in a complementary fashion.
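As a concrete illustration of the approach described in this abstract, the sketch below runs a toy genetic algorithm, retains every visited candidate solution, projects the full trajectory into a two-dimensional PCA sub-space, and reads per-generation diversity and variable patterns from it. This is our own minimal sketch, not the authors' code: the weighted-OneMax objective, the GA settings, and the diversity metric are illustrative assumptions.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
N, POP, GENS = 20, 40, 30          # bitstring length, population size, generations

def fitness(pop):                   # toy objective: weighted OneMax
    return pop @ np.arange(1, N + 1)

pop = rng.integers(0, 2, size=(POP, N))
trajectory, labels = [], []
for gen in range(GENS):
    f = fitness(pop)
    trajectory.append(pop.copy()); labels.append(np.full(POP, gen))
    # Tournament selection (size 2) -> uniform crossover -> bit-flip mutation.
    a, b = rng.integers(0, POP, (2, POP))
    parents = pop[np.where(f[a] > f[b], a, b)]
    mask = rng.random((POP, N)) < 0.5
    children = np.where(mask, parents, parents[rng.permutation(POP)])
    children ^= (rng.random((POP, N)) < 1.0 / N).astype(int)
    pop = children

X = np.vstack(trajectory)           # every visited candidate solution
gens = np.concatenate(labels)
pca = PCA(n_components=2).fit(X)
Z = pca.transform(X)

# Per-generation diversity in the sub-space: spread around the generation centroid.
for g in range(0, GENS, 5):
    pts = Z[gens == g]
    print(f"gen {g:2d}: diversity {pts.std(axis=0).sum():.3f}")

# PC coefficients hint at which variables drive movement along the trajectory;
# for this weighted objective they should roughly track the objective weights.
print("PC1 loadings:", np.round(pca.components_[0], 2))

Raising the selection pressure (e.g., a larger tournament size) should show up in this post-runtime analysis as a faster collapse of the per-generation spread in the sub-space.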
arXiv (Cornell University), Journal year: 2023, Issue: unknown. Published: January 1, 2023.
In black-box optimization, it is essential to understand why an algorithm instance works on a set of problem instances while failing on others, and to provide explanations of its behavior. We propose a methodology for formulating an algorithm instance footprint that consists of a set of problem instances that are easy to be solved and a set of problem instances that are difficult to be solved by the algorithm instance. This behavior of the algorithm instance is further linked to the landscape properties of the problem instances to explain which properties make some problem instances easy or challenging. The proposed methodology uses meta-representations that embed the landscape properties of the problem instances and the performance of the algorithm instance into the same vector space. These meta-representations are obtained by training a supervised machine learning regression model for performance prediction and applying model explainability techniques to assess the importance of the landscape features to the performance predictions. Next, deterministic clustering of the meta-representations demonstrates that using them captures algorithm performance across the instance space and detects regions of poor and good algorithm performance, together with an explanation of which landscape properties are leading to it.
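The pipeline in this abstract (performance-prediction model, per-instance explanations as meta-representations, deterministic clustering) can be sketched end-to-end. The code below is our own illustration, not the paper's implementation: it substitutes additive linear contributions for the paper's model-explainability step and uses k-means with a fixed seed as a simple stand-in for deterministic clustering; the synthetic "landscape features" and performance values exist only for the demo.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_instances, n_features = 300, 5
feature_names = [f"ela_{i}" for i in range(n_features)]   # hypothetical ELA features

X = rng.normal(size=(n_instances, n_features))
# Synthetic ground truth: performance depends strongly on two landscape properties.
performance = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n_instances)

# Supervised regression model for algorithm performance prediction.
model = Ridge().fit(X, performance)

# Per-instance explanation: each feature's additive contribution to the prediction.
# (A simple stand-in for the model-explainability step in the methodology.)
meta = X * model.coef_        # shape (instances, features): the meta-representations

# Clustering of meta-representations with a fixed seed, as a deterministic stand-in.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(meta)

for c in range(2):
    mask = clusters == c
    print(f"cluster {c}: mean predicted performance {model.predict(X[mask]).mean():+.2f}")
    top = int(np.argmax(np.abs(meta[mask]).mean(axis=0)))
    print(f"  dominant landscape feature: {feature_names[top]}")

The two clusters play the role of the footprint's easy and hard regions: each groups instances with similar predicted performance, and inspecting the dominant features of a cluster explains which landscape properties drive that behavior.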