Canadian Journal of Civil Engineering, Journal year: 2024, Issue: 51(5), pp. 529–544. Published: Jan. 11, 2024.
Effective winter road maintenance relies on precise friction estimation. Machine learning (ML) models have shown significant promise in this task; however, their inherent complexity makes understanding their inner workings challenging. This paper addresses this issue by conducting a comparative analysis of friction estimation using four ML methods: regression tree, random forest, eXtreme Gradient Boosting (XGBoost), and support vector regression (SVR). We then employ the SHapley Additive exPlanations (SHAP) explainable artificial intelligence (AI) technique to enhance model interpretability. Our analysis of an Alberta dataset reveals that XGBoost performs best, with an accuracy of 91.39%. The SHAP analysis illustrates logical relationships between predictor features within all three tree-based models, but it also uncovers inconsistencies in the SVR model, potentially attributed to its insufficient handling of feature interactions. Thus, this study not only showcases the role of explainable AI in improving model interpretability for friction estimation, but also provides practical insights that could improve winter road maintenance decisions.
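
To make the kind of pipeline this abstract describes concrete, here is a minimal Python sketch of fitting an XGBoost regressor and explaining it with SHAP. The synthetic data, feature count, and hyperparameters are illustrative assumptions, not details from the study.

    # Illustrative sketch (not the authors' code): fit an XGBoost regressor on
    # a synthetic stand-in for the friction dataset, then explain it with SHAP.
    import numpy as np
    import shap
    import xgboost
    from sklearn.model_selection import train_test_split

    # Hypothetical data: X holds predictor features (e.g., weather and
    # road-condition variables), y the measured friction values.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 6))
    y = 0.5 * X[:, 0] - 0.3 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=1000)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = xgboost.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
    model.fit(X_train, y_train)

    # TreeExplainer computes exact SHAP values for tree ensembles; the mean
    # absolute SHAP value per feature gives a global importance ranking.
    shap_values = shap.TreeExplainer(model).shap_values(X_test)
    print(np.abs(shap_values).mean(axis=0))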
Machine Learning and Knowledge Extraction, Journal year: 2024, Issue: 6(2), pp. 1170–1192. Published: May 27, 2024.
Artificial Intelligence (AI) plays an increasingly integral role in decision-making processes. In order to foster trust in AI predictions, many approaches towards explainable AI (XAI) have been developed and evaluated. Surprisingly, one factor that is essential for trust has been underrepresented in XAI research so far: uncertainty, both with respect to how it is modeled in Machine Learning (ML) and how it is perceived by humans relying on AI assistance. This review paper provides an in-depth analysis of both aspects. We review established and recent methods to account for uncertainty in ML models, and we discuss empirical evidence on how model uncertainty is perceived by human users of AI systems. We summarize the methodological advancements and limitations regarding both uncertainty modeling and its perception. Finally, we discuss the implications of the current state of the art for future development. We believe that highlighting the role of uncertainty will be helpful to practitioners and researchers and could ultimately support a more responsible use of AI in practical applications.
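
As a concrete illustration of one common way to account for uncertainty in ML models, the sketch below uses a small deep-ensemble-style setup: disagreement between independently initialized networks serves as an epistemic uncertainty signal. The data and model choices are assumptions for illustration only; the review itself covers a much broader range of methods.

    # Illustrative sketch (an assumption, not from the review): a small
    # deep-ensemble-style estimate of predictive uncertainty.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=500)

    # Train identically specified networks from different random
    # initializations; their disagreement signals epistemic uncertainty.
    ensemble = [
        MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                     random_state=seed).fit(X, y)
        for seed in range(5)
    ]

    X_query = np.linspace(-5, 5, 200).reshape(-1, 1)
    preds = np.stack([m.predict(X_query) for m in ensemble])
    mean, std = preds.mean(axis=0), preds.std(axis=0)
    # std grows outside the training range [-3, 3], flagging inputs
    # where the model's prediction should be trusted less.
    print(mean[:3], std[:3])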
American Journal of Industrial Medicine, Journal year: 2024, Issue: unknown. Published: Sep. 2, 2024.
Artificial intelligence (AI), the field of computer science that designs machines to perform tasks that typically require human intelligence, has seen rapid advances in the development of foundation systems such as large language models. In the workplace, the adoption of AI technologies can result in a broad range of hazards and risks for workers, as illustrated by the recent growth of industrial robotics and algorithmic management. Sources of risk from the deployment of AI across society and the workplace have led numerous government and private sector bodies to issue guidelines that propose principles for governing the design and use of trustworthy and ethical AI. As AI capabilities become integrated into devices, machines, and industry sectors, employers and occupational safety and health practitioners will be challenged to manage worker health, safety, and well-being. Five management options are presented as ways to assure that AI not only enables safe machinery and work processes but also plays a significant role in the future of work. The practice and research communities need to ensure that the promise of these new technologies results in benefit, not harm, to workers.
Explainable Artificial Intelligence (XAI) has emerged as a crucial research area to address the interpretability challenges posed by complex machine learning models. In this survey paper, we provide a comprehensive analysis of existing approaches in the field of XAI, focusing on the tradeoff between model accuracy and interpretability. Motivated by the need to address this tradeoff, we conduct an extensive review of the literature, presenting a multi-view taxonomy that offers a new perspective on XAI methodologies. We analyze various sub-categories of XAI methods, considering their strengths, weaknesses, and practical challenges. Moreover, we explore causal relationships in explanations and discuss approaches dedicated to explaining cross-domain classifiers. The latter is particularly important in scenarios where training and test data are sampled from different distributions. Drawing insights from our analysis, we propose future research directions, including exploring explainable allied learning paradigms, developing evaluation metrics for both traditionally trained and cross-domain learning-based classifiers, and applying neural architecture search techniques to minimize the accuracy–interpretability tradeoff. This paper provides an overview of the state of the art, serving as a valuable resource for researchers and practitioners interested in understanding and advancing the field.
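
The accuracy-interpretability tradeoff at the heart of this survey can be seen in a few lines of code. The sketch below (an illustrative setup, not taken from the paper) contrasts a depth-3 decision tree, whose entire logic can be printed and read, with a gradient-boosted ensemble that is typically more accurate but much harder to inspect.

    # Illustrative setup (not taken from the survey): a depth-3 decision tree
    # can be printed and read in full, while a gradient-boosted ensemble is
    # usually more accurate but far harder to inspect.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
    boost = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

    print("tree accuracy: ", tree.score(X_te, y_te))
    print("boost accuracy:", boost.score(X_te, y_te))
    print(export_text(tree))  # the whole interpretable model fits on one screen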