Ingénierie des systèmes d'information,
Journal year: 2024, Issue 29(1), pp. 83-93
Published: Feb. 27, 2024
In this paper, we have developed a description of an agent-based model for simulating the evacuation of crowds from complex physical spaces while escaping dangerous situations. The description specifies a space containing a set of differently shaped fences and obstacles, and an exit door. The pedestrians comprising the crowd, moving in order to be evacuated, are described as intelligent agents that, using supervised machine learning on perception-based data, perceive a particular environment differently. The description is written in the Python language, and its execution represents the simulation. Before the simulation, the description can be validated with an animation written in the same language, to fix possible problems in the description. A performance evaluation is presented through an analysis of the simulation results, showing that these results are very encouraging.
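As an illustration of the kind of agent-based description the paper refers to, the following is a minimal sketch in Python (hypothetical class and function names, not the authors' code): each pedestrian agent steps toward the exit while being repelled by circular obstacles, and the simulation runs until the crowd is evacuated.

import math
import random

class Pedestrian:
    def __init__(self, x, y, speed=1.0):
        self.x, self.y = x, y
        self.speed = speed
        self.evacuated = False

    def step(self, exit_pos, obstacles):
        """Move one time step toward the exit, deflecting away from obstacles."""
        if self.evacuated:
            return
        dx, dy = exit_pos[0] - self.x, exit_pos[1] - self.y
        dist = math.hypot(dx, dy) or 1e-9
        vx, vy = dx / dist, dy / dist
        # Simple repulsion from each circular obstacle (cx, cy, radius).
        for cx, cy, r in obstacles:
            ox, oy = self.x - cx, self.y - cy
            d = math.hypot(ox, oy) or 1e-9
            if d < r + 1.0:
                vx += ox / d
                vy += oy / d
        norm = math.hypot(vx, vy) or 1e-9
        self.x += self.speed * vx / norm
        self.y += self.speed * vy / norm
        if math.hypot(exit_pos[0] - self.x, exit_pos[1] - self.y) < 0.5:
            self.evacuated = True

def simulate(n_agents=50, steps=200):
    """Run the crowd until everyone is evacuated or the step budget runs out."""
    exit_pos = (20.0, 10.0)
    obstacles = [(10.0, 10.0, 2.0), (15.0, 6.0, 1.5)]
    crowd = [Pedestrian(random.uniform(0, 5), random.uniform(0, 20)) for _ in range(n_agents)]
    for t in range(steps):
        for p in crowd:
            p.step(exit_pos, obstacles)
        if all(p.evacuated for p in crowd):
            return t + 1
    return steps

if __name__ == "__main__":
    print("steps to evacuate:", simulate())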
it - Information Technology,
Journal year: 2025, Issue unknown
Published: April 30, 2025
Abstract
As artificial intelligence (AI) increasingly permeates high-stakes domains such as healthcare, transportation, and law enforcement, ensuring its trustworthiness has become a critical challenge. This article proposes an integrative Explainable AI (XAI) framework to address the challenges of interpretability, explainability, interactivity, and robustness. By combining XAI methods, incorporating human-AI interaction, and using suitable evaluation techniques, the implementation of this framework serves as a holistic approach. The article discusses the framework's contribution to trustworthy AI and gives an outlook on open questions related to interdisciplinary collaboration, generalization, and evaluation.
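To make the idea of pairing an XAI method with a suitable evaluation technique concrete, here is a minimal, hypothetical Python sketch (not the article's framework): a perturbation-based attribution for a black-box model, combined with a simple deletion-style faithfulness check.

def attribute(model, x, baseline=0.0):
    """Score each feature by how much the prediction drops when it is removed."""
    base_pred = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline
        scores.append(base_pred - model(perturbed))
    return scores

def deletion_check(model, x, scores, baseline=0.0):
    """Remove features from most to least important and record the prediction."""
    order = sorted(range(len(x)), key=lambda i: -scores[i])
    current = list(x)
    trace = [model(current)]
    for i in order:
        current[i] = baseline
        trace.append(model(current))
    return trace  # for a faithful attribution, the prediction should fall off quickly

if __name__ == "__main__":
    # Toy "black-box": a fixed linear model over three features.
    model = lambda x: 2.0 * x[0] + 0.5 * x[1] - 1.0 * x[2]
    x = [1.0, 4.0, 2.0]
    scores = attribute(model, x)
    print("attributions:", scores)
    print("deletion trace:", deletion_check(model, x, scores))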
For social robots to be able to operate in unstructured public spaces, they need to gauge complex factors such as human-robot engagement and inter-person groups, and decide how and with whom to interact. Additionally, they should explain their decisions after the fact, to improve accountability and confidence in their behavior. To address this, we present a two-layered proactive system that extracts high-level features from low-level perceptions and uses these to make decisions regarding the initiation and maintenance of human-robot interactions. With this system outlined, the primary focus of this work is then a novel method to generate counterfactual explanations in response to a variety of contrastive queries. We provide an early proof of concept to illustrate how such explanations can be generated by leveraging the two-layer system.
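A minimal sketch of how counterfactual explanations over high-level features could be generated is shown below (hypothetical feature names and decision rule, not the authors' two-layer system): the search returns the smallest set of feature changes that flips a rule-based interaction decision.

from itertools import combinations

def decide(f):
    """Toy decision layer: initiate interaction if engaged, not in a closed group, and nearby."""
    return f["engaged"] and not f["in_closed_group"] and f["distance_m"] < 3.0

def counterfactual(features, alternatives):
    """Return the smallest change to `features` that flips decide()."""
    original = decide(features)
    keys = list(alternatives)
    for size in range(1, len(keys) + 1):
        for subset in combinations(keys, size):
            changed = dict(features)
            for k in subset:
                changed[k] = alternatives[k]
            if decide(changed) != original:
                return {k: (features[k], changed[k]) for k in subset}
    return None

if __name__ == "__main__":
    observed = {"engaged": False, "in_closed_group": False, "distance_m": 2.0}
    # Candidate "what if" values for a contrastive query such as
    # "why did you not approach this person?"
    alternatives = {"engaged": True, "in_closed_group": True, "distance_m": 5.0}
    print("decision:", decide(observed))
    print("counterfactual change:", counterfactual(observed, alternatives))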
With the increasing adoption of Artificial Intelligence (AI) systems in high-stake domains, such as healthcare, effective collaboration between domain experts and AI is imperative. To facilitate such collaboration, we introduce an Explanatory Model Steering system that allows domain experts to steer prediction models using their domain knowledge. The system includes an explanation dashboard that combines different types of data-centric and model-centric explanations, and the prediction models can be steered through manual and automated data configuration approaches. It allows domain experts to apply their prior knowledge for configuring the underlying training data and refining the prediction models. Additionally, our model steering system has been evaluated in a healthcare-focused scenario with 174 healthcare experts through three extensive user studies. Our findings highlight the importance of involving domain experts during model steering, ultimately leading to improved human-AI collaboration.
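The following is a minimal, hypothetical sketch of data-configuration-based steering (not the described system), assuming scikit-learn is available: an expert assigns sample weights to the training data, and excluded or emphasised samples change the retrained model.

import numpy as np
from sklearn.linear_model import LogisticRegression

def steer_and_retrain(X, y, expert_weights):
    """Retrain with expert-assigned sample weights (0 = exclude, >1 = emphasise)."""
    mask = expert_weights > 0
    model = LogisticRegression(max_iter=1000)
    model.fit(X[mask], y[mask], sample_weight=expert_weights[mask])
    return model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    # Expert judges the first 20 samples to be unreliable and removes them,
    # while emphasising a trusted subset.
    w = np.ones(len(y))
    w[:20] = 0.0
    w[20:40] = 3.0
    model = steer_and_retrain(X, y, w)
    print("accuracy on retained samples:", model.score(X[20:], y[20:]))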
Deleted Journal,
Journal year: 2024, Issue unknown, pp. 1-13
Published: Aug. 28, 2024
Deep learning is being very successful in supporting humans in the interpretation of complex data (such as images and text) for critical decision tasks. However, it still remains difficult for human experts to understand how such results are achieved, due to the "black box" nature of the deep models used. In high-stake decision making scenarios such as medical imaging diagnostics, a lack of transparency hinders the adoption of these techniques in practice. In this position paper we present a conceptual methodology for the design of a neuro-symbolic cycle to address the need for explainability and confidence (including trust) when deep models are used to support decision making, and we discuss the challenges and opportunities of its implementation as well as its application in real world scenarios. We elaborate on how to leverage the potential of hybrid artificial intelligence combining neural learning and symbolic reasoning in a human-centered approach to explainability. We advocate that the phases of the cycle should include i) the extraction of knowledge from a trained network to represent and encode its behaviour, ii) the validation of the extracted knowledge through commonsense and domain knowledge, iii) the generation of explanations for human experts, iv) the ability to map human feedback into the validated representation from i), and v) the injection of some of this knowledge into a non-trained network to enable knowledge-informed learning. The holistic combination of causality, expressive logical inference, and representation learning would result in a seamless integration of (neural) learning and (cognitive) reasoning that makes it possible to retain access to the inherently explainable symbolic representation without losing the learning power of the neural representation. The involvement of human experts in the design, validation and feedback process is crucial, and paves the way for a new human-AI paradigm where the human role goes beyond labeling data, towards the steering of neural-cognitive processes.
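As a concrete, if simplified, illustration of phase i), the sketch below (hypothetical, assuming scikit-learn) extracts symbolic rules from a trained network by fitting a shallow decision-tree surrogate to the network's predictions rather than to the ground-truth labels, so that the rules describe the network's behaviour and become candidates for validation in phase ii).

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = ((X[:, 0] > 0) & (X[:, 1] < 1)).astype(int)

# "Black-box" neural model.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

# Surrogate trained on the network's outputs, not the ground truth, so the
# extracted rules encode the network's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, net.predict(X))
print(export_text(surrogate, feature_names=["f0", "f1", "f2"]))
# The printed rules are candidates for phase ii), validation against domain knowledge.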
2021 IEEE/CVF International Conference on Computer Vision (ICCV),
Journal year: 2023, Issue unknown, pp. 1922-1933
Published: Oct. 1, 2023
Despite being highly performant, deep neural networks might base their decisions on features that spuriously correlate with the provided labels, thus hurting generalization. To mitigate this, 'model guidance' has recently gained popularity, i.e. the idea of regularizing the models' explanations to ensure that they are "right for the right reasons" [49]. While various techniques to achieve such model guidance have been proposed, experimental validation of these approaches has so far been limited to relatively simple and / or synthetic datasets. To better understand the effectiveness of the design choices explored in the context of model guidance, in this work we conduct an in-depth evaluation across loss functions, attribution methods, models, and 'guidance depths' on the PASCAL VOC 2007 and MS COCO 2014 datasets. As annotation costs can limit its applicability, we also place a particular focus on efficiency. Specifically, we guide models via bounding box annotations, which are much cheaper to obtain than the commonly used segmentation masks, and evaluate robustness under limited (e.g. only 1% of annotated images) or overly coarse annotations. Further, we propose using the EPG score as an additional evaluation metric and loss function ('Energy loss'). We show that optimizing for the Energy loss leads models to exhibit distinct object-specific features, despite the annotations also including background regions. Lastly, we show that such model guidance can improve generalization under distribution shifts. Code available at: https://github.com/sukrutrao/Model-Guidance
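For illustration, an EPG-style "Energy loss" on bounding box annotations can be written in a few lines of PyTorch (a sketch of the idea, not the repository's implementation): the loss is one minus the fraction of positive attribution energy that falls inside the annotated box.

import torch

def energy_loss(attribution, box_mask, eps=1e-8):
    """attribution: (B, H, W) attribution maps; box_mask: (B, H, W) binary masks."""
    pos = attribution.clamp(min=0)
    inside = (pos * box_mask).sum(dim=(1, 2))
    total = pos.sum(dim=(1, 2)) + eps
    epg = inside / total          # EPG score in [0, 1]
    return (1.0 - epg).mean()     # minimise to concentrate energy inside the box

if __name__ == "__main__":
    attr = torch.rand(2, 8, 8, requires_grad=True)
    mask = torch.zeros(2, 8, 8)
    mask[:, 2:6, 2:6] = 1.0       # bounding box region
    loss = energy_loss(attr, mask)
    loss.backward()               # gradients flow back through the attributions
    print("energy loss:", loss.item())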
Users with large domain knowledge can be reluctant to use prediction models. This also applies to the sports domain, where running coaches rarely rely on marathon prediction tools for race-plan advice for their runners' next marathon. This paper studies the effect of adding interactivity to such models, to incorporate and acknowledge users' domain knowledge. In think-aloud sessions and an online study, we tested an interactive machine learning tool that allowed coaches to indicate the importance of earlier races feeding into the prediction model. Our results show that coaches deploy rich domain knowledge when working with the model for runners familiar to them, and that their adaptations improved prediction accuracy. Those who could interact with the model displayed more trust and acceptance in the resulting predictions.
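A minimal sketch of the kind of interaction studied, with hypothetical names and a deliberately simple predictor (not the study's tool): the coach's importance weights on earlier races directly reweight the prediction for the next marathon.

def predict_marathon(race_times_min, coach_weights):
    """Weighted average of earlier (marathon-equivalent) race times, in minutes."""
    total = sum(coach_weights)
    if total == 0:
        raise ValueError("at least one race must have a non-zero weight")
    return sum(t * w for t, w in zip(race_times_min, coach_weights)) / total

if __name__ == "__main__":
    races = [215.0, 204.0, 230.0]          # past marathon-equivalent times
    default = predict_marathon(races, [1, 1, 1])
    # The coach knows the third race was run while injured and down-weights it.
    adjusted = predict_marathon(races, [1, 2, 0.2])
    print(f"default: {default:.1f} min, coach-adjusted: {adjusted:.1f} min")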
Biases in Artificial Intelligence (AI) or Machine Learning (ML) systems due to skewed datasets problematise the application of prediction models in practice. Representation bias is a prevalent form of bias found in the majority of datasets. This bias arises when the training data inadequately represents certain segments of the data space, resulting in the poor generalisation of prediction models. Despite AI practitioners employing various methods to mitigate representation bias, their effectiveness is often limited by a lack of thorough domain knowledge. To address this limitation, this paper introduces human-in-the-loop interaction approaches for debiasing generated data by involving domain experts. Our work advocates a controlled data generation process involving domain experts to effectively mitigate the effects of representation bias. We argue that domain experts can leverage their expertise to assess how representation bias affects prediction models. Moreover, our interaction approaches can facilitate domain experts in steering data augmentation algorithms to produce debiased augmented datasets, and in validating or refining the generated samples to reduce bias. We also discuss how these approaches can be leveraged in designing and developing user-centred AI systems to reduce the impact of bias through effective collaboration between domain experts and AI.
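A minimal, hypothetical sketch of expert-steered augmentation for an underrepresented segment (not the paper's approach): synthetic samples are generated by jittering rows of the expert-identified segment, and an expert-style validation step filters the generated samples before retraining.

import numpy as np

def augment_segment(X, y, segment_mask, n_new, noise_scale=0.05, rng=None):
    """Generate synthetic samples by jittering rows of the underrepresented segment."""
    rng = rng or np.random.default_rng(0)
    seed_rows = X[segment_mask]
    picks = rng.integers(0, len(seed_rows), size=n_new)
    X_new = seed_rows[picks] + rng.normal(scale=noise_scale, size=(n_new, X.shape[1]))
    y_new = y[segment_mask][picks]
    return X_new, y_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = rng.integers(0, 2, size=1000)
    # Expert identifies an underrepresented segment, e.g. feature 0 above 2.0.
    segment = X[:, 0] > 2.0
    X_new, y_new = augment_segment(X, y, segment, n_new=50, rng=rng)
    # Expert validation step: keep only generated samples still inside the segment.
    keep = X_new[:, 0] > 2.0
    print("segment size before:", segment.sum(), "added after validation:", keep.sum())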