Ingénierie des systèmes d'information,
Journal year: 2024,
Issue: 29(1), pp. 83 - 93
Published: Feb. 27, 2024
In this paper, we have developed a description of an agent-based model for simulating the evacuation of crowds from complex physical spaces while escaping dangerous situations. The description defines a space containing a set of differently shaped fences and obstacles, and an exit door. The pedestrians comprising the crowd, moving in order to be evacuated, are described as intelligent agents with supervised machine learning using perception-based data, so that they perceive a particular environment differently. The description is written in the Python language, where its execution represents the simulation. Before the simulation, the description can be validated by an animation written in the same language, in order to fix possible problems in the description. A performance evaluation is presented with an analysis of the simulation results, showing that these results are very encouraging.
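The abstract describes pedestrians as agents navigating a space with obstacles toward an exit door. A minimal sketch of that kind of agent-based evacuation loop is below; it is illustrative only, not the authors' model, and all names and parameters (`Agent`, `EXIT`, `OBSTACLES`, the speed and radii) are invented for the example.

```python
import math
import random

EXIT = (10.0, 5.0)                       # assumed exit-door position
OBSTACLES = [((5.0, 5.0), 1.0)]          # (centre, radius) circular obstacles

class Agent:
    """A point pedestrian heading for the exit, steering around obstacles."""
    def __init__(self, x, y, speed=0.2):
        self.x, self.y, self.speed = x, y, speed
        self.evacuated = False

    def step(self):
        if self.evacuated:
            return
        # Desired direction: straight towards the exit door.
        dx, dy = EXIT[0] - self.x, EXIT[1] - self.y
        dist = math.hypot(dx, dy)
        if dist < 0.3:                   # close enough: agent leaves the space
            self.evacuated = True
            return
        dx, dy = dx / dist, dy / dist
        # Simple repulsion from any obstacle whose comfort zone the agent enters.
        for (cx, cy), r in OBSTACLES:
            ox, oy = self.x - cx, self.y - cy
            d = math.hypot(ox, oy)
            if d < r + 0.5:
                dx += ox / d
                dy += oy / d
        norm = math.hypot(dx, dy) or 1.0
        self.x += self.speed * dx / norm
        self.y += self.speed * dy / norm

def simulate(agents, max_steps=500):
    """Advance all agents until everyone is evacuated or the budget runs out."""
    for step in range(max_steps):
        for a in agents:
            a.step()
        if all(a.evacuated for a in agents):
            return step + 1
    return max_steps

random.seed(0)
crowd = [Agent(random.uniform(0, 3), random.uniform(0, 10)) for _ in range(20)]
steps_needed = simulate(crowd)
```

In the paper, each agent's perception would additionally be mediated by a supervised model trained on perception-based data; here the steering rule is hand-coded purely to show the simulation skeleton.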
ACM Transactions on Interactive Intelligent Systems,
Journal year: 2023,
Issue: 13(4), pp. 1 - 29
Published: March 28, 2023
Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow an AI to take advice from humans in response to explanations are similarly useful. While both capabilities are well developed for transparent learning models (e.g., linear models and GA²Ms), and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, little attention has been given to advice methods for opaque models. This article introduces LIMEADE, the first general framework that translates both positive and negative advice (expressed using a high-level vocabulary such as that employed by post hoc explanations) into an update for an arbitrary, underlying opaque model. We demonstrate the generality of our approach with case studies on 70 real-world models across two broad domains: image classification and text recommendation. We show that our method improves accuracy compared to a rigorous baseline on both domains. For the text modality, we apply our framework to a neural recommender system for scientific papers on a public website; our user study shows that our framework leads to significantly higher perceived user control, trust, and satisfaction.
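The core idea of translating high-level advice into an update of an opaque model can be sketched as follows. This is a hedged toy, not the LIMEADE implementation: a human endorses a high-level term seen in an explanation, the advice is turned into pseudo-labels over an unlabeled pool, and the black box is retrained. The word-count "model" and all names (`train`, `predict`, `apply_advice`) stand in for an arbitrary opaque learner.

```python
from collections import Counter

def train(docs):
    """Stand-in opaque model: per-class word counts from (text, label) pairs."""
    counts = {+1: Counter(), -1: Counter()}
    for text, label in docs:
        counts[label].update(text.split())
    return counts

def predict(model, text):
    """Classify by comparing positive vs negative word evidence."""
    score = sum(model[+1][w] - model[-1][w] for w in text.split())
    return +1 if score >= 0 else -1

def apply_advice(labeled, unlabeled, term, polarity):
    """Translate advice ('term' is evidence for class 'polarity') into
    pseudo-labels on the unlabeled pool, then retrain the model."""
    pseudo = [(t, polarity) for t in unlabeled if term in t.split()]
    return train(labeled + pseudo)

labeled = [("great paper", +1), ("weak results", -1)]
pool = ["novel method strong results", "novel idea", "poor baselines"]

# Human advice after seeing an explanation: "novel" indicates relevance.
model = apply_advice(labeled, pool, "novel", +1)
```

After the advice, documents mentioning "novel" carry positive evidence the original two labeled examples never provided, which is the intended effect of the update.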
As the field of explainable AI (XAI) is maturing, calls for interactive explanations for (the outputs of) AI models are growing, but the state-of-the-art predominantly focuses on static explanations. In this paper, we focus instead on interactive explanations framed as conflict resolution between agents (i.e. AI models and/or humans) by leveraging computational argumentation. Specifically, we define Argumentative eXchanges (AXs) for dynamically sharing, in multi-agent systems, information harboured in individual agents' quantitative bipolar argumentation frameworks towards resolving conflicts amongst the agents. We then deploy AXs in the XAI setting in which a machine and a human interact about the machine's predictions. We identify and assess several theoretical properties characterising AXs that are suitable for XAI. Finally, we instantiate AXs for XAI by defining various agent behaviours, e.g. capturing counterfactual patterns of reasoning in machines and highlighting the effects of cognitive biases in humans. We show experimentally (in a simulated environment) the comparative advantages of these behaviours in terms of conflict resolution, and show that the strongest argument may not always be the most effective.
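The quantitative bipolar argumentation frameworks mentioned above assign each argument a base score and let attackers weaken it and supporters strengthen it. A small illustration under DF-QuAD gradual semantics, one common choice and not necessarily the semantics used in the paper, is below; the arguments, base scores and relations are invented for the example.

```python
def aggregate(strengths):
    """Probabilistic-sum aggregation of attacker or supporter strengths."""
    acc = 0.0
    for s in strengths:
        acc = acc + s - acc * s
    return acc

def strength(arg, base, attackers, supporters, cache=None):
    """Final strength of 'arg' in an acyclic QBAF (DF-QuAD combination):
    the base score is scaled down if attack outweighs support, up otherwise."""
    cache = {} if cache is None else cache
    if arg in cache:
        return cache[arg]
    va = aggregate([strength(a, base, attackers, supporters, cache)
                    for a in attackers.get(arg, [])])
    vs = aggregate([strength(s, base, attackers, supporters, cache)
                    for s in supporters.get(arg, [])])
    v0 = base[arg]
    if va >= vs:
        out = v0 - v0 * (va - vs)
    else:
        out = v0 + (1.0 - v0) * (vs - va)
    cache[arg] = out
    return out

base = {"claim": 0.5, "pro": 0.8, "con": 0.6}
attackers = {"claim": ["con"]}     # "con" attacks the claim
supporters = {"claim": ["pro"]}    # "pro" supports the claim
```

Here support (0.8) outweighs attack (0.6), so the claim's strength rises from its base score 0.5 to 0.5 + 0.5 × 0.2 = 0.6; in an AX, agents would exchange such arguments until their evaluations of the disputed claim no longer conflict.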
Entropy,
Journal year: 2023,
Issue: 25(12), pp. 1574 - 1574
Published: Nov. 22, 2023
Research on Explainable Artificial Intelligence has recently started exploring the idea of producing explanations that, rather than being expressed in terms of low-level features, are encoded in terms of interpretable concepts learned from data. How to reliably acquire such concepts is, however, still fundamentally unclear. An agreed-upon notion of concept interpretability is missing, with the result that concepts used by both post hoc explainers and concept-based neural networks are acquired through a variety of mutually incompatible strategies. Critically, most of these neglect the human side of the problem: a representation is understandable only insofar as it can be understood by the human at the receiving end. The key challenge in human-interpretable representation learning (hrl) is how to model and operationalize this human element. In this work, we propose a mathematical framework for acquiring interpretable representations suitable for both post hoc explainers and concept-based neural networks. Our formalization of hrl builds on recent advances in causal representation learning and explicitly models a human stakeholder as an external observer. This allows us to derive a principled notion of alignment between the machine's vocabulary and the human's. In doing so, we link alignment to a simple and intuitive name transfer game, and clarify its relationship to a well-known property of representations, namely disentanglement. We also show that alignment is linked to the issue of undesirable correlations among concepts, known as concept leakage, and to content-style separation, all through a general information-theoretic reformulation of these properties. Our conceptualization aims to bridge the gap between the human and algorithmic sides of interpretability and establish a stepping stone for new research on human-interpretable representations.
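The information-theoretic view of concept leakage can be made concrete with a toy example, not the paper's formalism: leakage between two learned concepts that should be independent shows up as non-zero mutual information between them. The discrete "shape" and "colour" samples below are invented for the illustration.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)                 # joint counts
    px = Counter(x for x, _ in pairs)    # marginal counts of X
    py = Counter(y for _, y in pairs)    # marginal counts of Y
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts cancelled out
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

# Disentangled concepts: knowing "shape" tells nothing about "colour".
independent = [(s, c) for s in "ST" for c in "RG"] * 5   # uniform product
# Leaky concepts: "shape" deterministically reveals "colour".
leaky = [("S", "R"), ("T", "G")] * 10
```

On the uniform product distribution the mutual information is 0 bits, while in the leaky case one concept fully determines the other (1 bit here), which is exactly the undesirable correlation the framework's reformulation is meant to rule out.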