Canadian Journal of Civil Engineering,
Journal year: 2024,
Issue: 51(5), pp. 529 - 544
Published: Jan. 11, 2024
Effective winter road maintenance relies on precise friction estimation. Machine learning (ML) models have shown significant promise in this task; however, their inherent complexity makes understanding their inner workings challenging. This paper addresses this issue by conducting a comparative analysis of friction estimation using four ML methods: regression tree, random forest, eXtreme Gradient Boosting (XGBoost), and support vector regression (SVR). We then employ the SHapley Additive exPlanations (SHAP) explainable artificial intelligence (AI) approach to enhance model interpretability. Our analysis of an Alberta dataset reveals that XGBoost performs best, with an accuracy of 91.39%. The SHAP analysis illustrates logical relationships between the predictor features within all three tree-based models, but it also uncovers inconsistencies in the SVR model, potentially attributed to insufficient capture of feature interactions. Thus, this work not only showcases the role of explainable AI in improving model interpretability for friction estimation, but also provides practical insights that could improve winter road maintenance decisions.
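
As a minimal sketch of the workflow described above (a gradient-boosted friction model attributed with SHAP), assuming a generic tabular setup; the feature names and synthetic data are illustrative placeholders, not the Alberta dataset or the paper's predictors.

# Minimal sketch: XGBoost friction regressor explained with SHAP.
# Synthetic data and hypothetical feature names; not the paper's dataset.
import numpy as np
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))          # e.g., air temp, surface temp, humidity, snowfall (hypothetical)
y = 0.6 - 0.1 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(scale=0.02, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = xgb.XGBRegressor(n_estimators=300, max_depth=4).fit(X_train, y_train)

# SHAP attributes each friction prediction to its input features,
# which is how tree-based models are typically inspected for interpretability.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print(shap_values.shape)                # (n_test_samples, n_features)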
Management Decision,
Journal year: 2024,
Issue: unknown
Published: June 12, 2024
Purpose
This study investigates the profound impact of artificial intelligence (AI) capabilities on decision-making processes and organizational performance, addressing a crucial gap in the literature by exploring the mediating role of decision-making speed and quality.
Design/methodology/approach
Drawing upon resource-based theory and prior research, this study constructs a comprehensive model and hypotheses to illuminate the influence of AI capabilities within organizations on decision-making speed, decision quality, and, ultimately, organizational performance. A dataset comprising 230 responses from diverse firms forms the basis of the analysis, with the study employing partial least squares structural equation modelling (PLS-SEM) for robust data examination.
Findings
The results demonstrate the pivotal role of AI capabilities in shaping organizational decision-making: AI capability significantly and positively affects decision-making speed, decision quality, and overall performance. Notably, decision quality is a critical factor contributing to enhanced performance. The analysis further uncovered mediation effects, suggesting that the relationship between AI capabilities and performance is partially mediated through decision-making speed.
Originality/value
This study contributes to the existing body of knowledge by providing empirical evidence of the multifaceted impact of AI capabilities on decision-making and performance. Elucidating these mediating effects advances our understanding of the complex mechanisms by which AI capabilities drive organizational success.
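
The mediation logic tested here (AI capability influencing performance partly through decision speed) can be illustrated with a simple product-of-coefficients check; note that the paper uses PLS-SEM rather than ordinary least squares, and the variable names and synthetic data below are purely illustrative assumptions.

# Illustrative mediation check (OLS, not the paper's PLS-SEM analysis):
# does decision_speed carry part of the effect of ai_capability on performance?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 230                                             # matches the sample size reported above
ai_capability = rng.normal(size=n)
decision_speed = 0.5 * ai_capability + rng.normal(size=n)                      # a-path
performance = 0.4 * decision_speed + 0.2 * ai_capability + rng.normal(size=n)  # b-path plus direct effect

a = sm.OLS(decision_speed, sm.add_constant(ai_capability)).fit().params[1]
full = sm.OLS(performance, sm.add_constant(np.column_stack([ai_capability, decision_speed]))).fit()
direct, b = full.params[1], full.params[2]
print(f"indirect (mediated) effect = {a * b:.3f}, direct effect = {direct:.3f}")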
npj 2D Materials and Applications,
Journal year: 2025,
Issue: 9(1)
Published: Feb. 1, 2025
MXenes are a versatile family of 2D inorganic materials with applications in energy storage, shielding, sensing, and catalysis. This review highlights computational studies using density functional theory and machine-learning approaches to explore their structure (stacking, functionalization, doping), properties (electronic, mechanical, magnetic), and application potential. Key advances and challenges are critically examined, offering insights into applying computational research to transition these materials from the lab to practical use.
Scientific Reports,
Journal year: 2025,
Issue: 15(1)
Published: Jan. 28, 2025
Abstract
Glaucoma poses a growing health challenge projected to escalate in the coming decades. However, current automated diagnostic approaches rely solely on black-box deep learning models, lacking explainability and trustworthiness. To address this issue, this study uses optical coherence tomography (OCT) images to develop an explainable artificial intelligence (XAI) tool for diagnosing and staging glaucoma, with a focus on its clinical applicability. A total of 334 normal and 268 glaucomatous eyes (86 early, 72 moderate, 110 advanced) were included, signal processing theory was employed, and model interpretability was rigorously evaluated. Leveraging SHapley Additive exPlanations (SHAP)-based global feature ranking, partial dependency analysis (PDA)-estimated decision boundary cut-offs, and machine learning (ML) models, a novel algorithm was developed to implement the XAI tool. Using the selected features, the ML models produce AUCs of 0.96 (95% CI: 0.95–0.98), 0.98 (95% CI: 0.96–1.00), and 1.00 (95% CI: 1.00–1.00), respectively, for differentiating early, moderate, and advanced glaucoma patients. Overall, the tool outperformed clinicians at the early stage and overall, with 10.4–11.2% higher accuracy. The user-friendly software shows potential as a valuable tool for eye care practitioners, offering transparent and interpretable insights to improve clinical decision-making.
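
A minimal sketch of the two interpretability steps named above, SHAP-based global feature ranking and partial dependence of the top-ranked feature, using a generic gradient-boosted classifier; the synthetic data and feature indices are illustrative assumptions, not the OCT-derived features of the study.

# Sketch: SHAP global feature ranking plus partial dependence of the top feature.
# Synthetic data and a generic classifier; not the study's OCT features or models.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))                     # hypothetical structural measurements
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=600) > 0).astype(int)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global ranking: mean |SHAP value| per feature, largest first.
shap_values = shap.TreeExplainer(clf).shap_values(X)
ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]
print("feature importance order:", ranking)

# Averaged model response over the top feature's value grid; a decision
# boundary cut-off can be estimated from where this curve changes sign.
pd_result = partial_dependence(clf, X, features=[int(ranking[0])])
print("partial dependence (first 5 grid points):", pd_result["average"][0][:5])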
Electronic Markets,
Journal year: 2022,
Issue: 32(4), pp. 2079 - 2102
Published: Oct. 23, 2022
Abstract
Contemporary decision support systems are increasingly relying on artificial intelligence technology such as machine learning algorithms to form intelligent systems. These systems have a human-like capacity for selected applications, based on a rationale which cannot be looked up conveniently and thus constitutes a black box. As a consequence, acceptance by end-users remains somewhat hesitant. While lacking transparency has been said to hinder trust and enforce aversion towards these systems, studies that connect user trust, transparency, and subsequent acceptance are scarce. In response, our research is concerned with the development of a theoretical model that explains end-user acceptance of intelligent decision support systems. We utilize the unified theory of acceptance and use of technology in information systems as well as explanation-related theories as the initial foundation. The proposed model is tested in an industrial maintenance workplace scenario, using domain experts as participants to represent the end-user group. Results show that acceptance is performance-driven at first sight. However, transparency plays an important indirect role in regulating the perception of performance.
International Journal of Intelligent Systems,
Journal year: 2023,
Issue: 2023, pp. 1 - 41
Published: Oct. 26, 2023
Given the tremendous potential and influence of artificial intelligence (AI) and algorithmic decision-making (DM), these systems have found wide-ranging applications across diverse fields, including education, business, healthcare industries, government, and justice sectors. While AI and DM offer significant benefits, they also carry a risk of unfavourable outcomes for users and society. As a result, ensuring the safety, reliability, and trustworthiness of these systems becomes crucial. This article aims to provide a comprehensive review of the synergy between AI and DM, focussing on the importance of trustworthiness. The review addresses the following four key questions, guiding readers towards a deeper understanding of this topic: (i) why do we need trustworthy AI? (ii) what are the requirements for trustworthy AI? In line with the second question, the key requirements that establish trustworthiness are explained, including explainability, accountability, robustness, fairness, acceptance of AI, privacy, accuracy, reproducibility, and human agency and oversight. (iii) how can trustworthy AI be achieved with data? and (iv) what are the priorities for trustworthy AI in terms of challenging applications? Regarding the last question, six different application areas are discussed: environmental science, 5G-based IoT networks, robotics, architecture, engineering and construction, financial technology, and healthcare. The article emphasises the need to address trustworthiness before deployment in order to achieve the goal of AI for good. An example is provided that demonstrates how trustworthy AI can be employed to eliminate bias in human resources management systems. The insights and recommendations presented in this paper will serve as a valuable guide for researchers seeking to develop trustworthy AI applications.
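
As an illustration of the kind of bias check alluded to for human resources management systems, the sketch below compares selection rates across a protected attribute (a demographic parity check); the data, threshold, and attribute are hypothetical and not taken from the article's example.

# Illustrative demographic parity check for a hypothetical HR screening model.
# Synthetic scores and a made-up protected attribute; not the article's example.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(size=500)               # model scores for 500 hypothetical applicants
group = rng.integers(0, 2, size=500)         # hypothetical protected attribute (0 or 1)
selected = scores > 0.7                      # screening decision at an assumed threshold

rate_0 = selected[group == 0].mean()
rate_1 = selected[group == 1].mean()
print(f"selection rates: {rate_0:.2f} vs {rate_1:.2f}; "
      f"demographic parity difference = {abs(rate_0 - rate_1):.2f}")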
Nature Communications,
Journal year: 2024,
Issue: 15(1)
Published: Jan. 13, 2024
Abstract
Antimicrobial resistance (AMR) and healthcare-associated infections pose a significant threat globally. One key prevention strategy is to follow antimicrobial stewardship practices, in particular, to maximise targeted oral therapy and reduce the use of indwelling vascular devices for intravenous (IV) administration. Appreciating when an individual patient can switch from IV to oral antibiotic treatment is often non-trivial and not standardised. To tackle this problem we created a machine learning model to predict when a patient could switch based on routinely collected clinical parameters. 10,362 unique intensive care unit stays were extracted and two informative feature sets identified. Our best model achieved a mean AUROC of 0.80 (SD 0.01) on the hold-out set while not being biased against individuals with protected characteristics. Interpretability methodologies were employed to create clinically useful visual explanations. In summary, our model provides individualised, fair, and interpretable predictions of when patients could switch from IV-to-oral antibiotic treatment. Prospective evaluation of safety and efficacy is needed before such technology can be applied clinically.
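
A minimal sketch of the kind of evaluation the abstract describes: a hold-out AUROC plus a per-group comparison over a protected characteristic; the classifier, features, and data are illustrative assumptions, not the study's model or ICU dataset.

# Sketch: hold-out AUROC with a per-group check for a binary "ready to switch" label.
# Synthetic data, hypothetical features, and a generic classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))                       # routinely collected parameters (hypothetical)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=10_000) > 0).astype(int)
group = rng.integers(0, 2, size=10_000)                # hypothetical protected characteristic

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

print(f"hold-out AUROC: {roc_auc_score(y_te, proba):.3f}")
for g in (0, 1):                                       # similar per-group AUROCs suggest no obvious bias
    mask = g_te == g
    print(f"group {g} AUROC: {roc_auc_score(y_te[mask], proba[mask]):.3f}")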