Electronics, Journal Year: 2024, Volume and Issue: 13(22), P. 4350 - 4350, Published: Nov. 6, 2024
This paper presents a comparative analysis of several decision models for detecting Structured Query Language (SQL) injection attacks, which remain one of the most prevalent and serious security threats to web applications. SQL injection enables attackers to exploit databases, gain unauthorized access, and manipulate data. Traditional detection methods often struggle due to the constantly evolving nature of these attacks, the increasing complexity of modern web applications, and the lack of transparency in the decision-making processes of machine learning models. To address these challenges, we evaluated the performance of various models, including decision tree, random forest, XGBoost, AdaBoost, Gradient Boosting Decision Tree (GBDT), and Histogram-based Gradient Boosting Decision Tree (HGBDT), using a comprehensive dataset. The primary motivation behind our approach is to leverage the strengths of ensemble and boosting techniques to enhance detection accuracy and robustness against attacks. By systematically comparing these models, we aim to identify the most effective algorithms for SQL injection detection systems. Our experiments show that AdaBoost achieved the highest performance, with an accuracy of 99.50% and an F1 score of 99.33%. Additionally, we applied SHapley Additive exPlanations (SHAPs) and Local Interpretable Model-agnostic Explanations (LIMEs) for local explainability, illustrating how each model classifies normal and attack cases. This enhances the trustworthiness of the models. These findings highlight the potential of ensemble learning to provide reliable and efficient detection solutions, thereby improving the security of web applications.
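As a sketch of the workflow this abstract describes, the snippet below (not the authors' code) trains the named tree-based and boosting classifiers on a hypothetical tabular SQL-query feature set and produces a LIME explanation for one prediction. The file name sqli.csv, its label column, and the feature extraction are assumptions; XGBoost is omitted to keep the dependencies to scikit-learn and lime.

```python
# Minimal sketch (not the authors' code): compare the named classifiers on a
# tabular SQL-injection dataset and explain one prediction with LIME.
# "sqli.csv" and its "label" column are hypothetical.
import pandas as pd
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              HistGradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

df = pd.read_csv("sqli.csv")                      # hypothetical feature file
feat_names = [c for c in df.columns if c != "label"]
X, y = df[feat_names].values, df["label"].values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "decision tree": DecisionTreeClassifier(random_state=42),
    "random forest": RandomForestClassifier(random_state=42),
    "AdaBoost":      AdaBoostClassifier(random_state=42),
    "GBDT":          GradientBoostingClassifier(random_state=42),
    "HGBDT":         HistGradientBoostingClassifier(random_state=42),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:14s} acc={accuracy_score(y_te, pred):.4f} "
          f"F1={f1_score(y_te, pred):.4f}")

# Local explanation: which features pushed one query toward "attack"?
explainer = LimeTabularExplainer(X_tr, feature_names=feat_names,
                                 class_names=["normal", "attack"])
exp = explainer.explain_instance(X_te[0], models["AdaBoost"].predict_proba,
                                 num_features=5)
print(exp.as_list())                              # top feature contributions
```
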
IEEE Access, Journal Year: 2023, Volume and Issue: 11, P. 131661 - 131676, Published: Jan. 1, 2023
Securing the Internet of Things (IoT) against cyber threats is a formidable challenge, and Intrusion Detection Systems (IDS) play a critical role in this effort. However, the lack of transparent explanations for IDS decisions remains a significant concern. In response, we introduce a novel approach that leverages a blending ensemble model for attack classification and integrates counterfactual and Local Interpretable Model-Agnostic Explanations (LIME) techniques to enhance the explanations. To assess the effectiveness of our approach, we conducted experiments using the recently introduced CICIoT2023 and IoTID20 datasets. These datasets are real-time, large-scale benchmarks of IoT environment attacks, offering a realistic and challenging scenario that captures the intricacies of intrusion detection in dynamic environments. Our experimental results demonstrate improvements in accuracy compared to conventional methods. Furthermore, the proposed approach provides clear and interpretable insights into the factors influencing decisions, empowering users to make informed security choices. Integrating explanation techniques enhances the reliability of intrusion detection systems. Therefore, this work represents an advancement in intrusion detection, offering a robust defense against cyber-attacks on IoT data.
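A blending ensemble of the kind this abstract mentions can be sketched as follows (illustrative only, not the paper's pipeline): base learners are fit on one split, and a meta-learner is fit on their held-out predicted probabilities. The choice of base models, split sizes, and logistic-regression meta-learner are assumptions, and the counterfactual/LIME explanation stage is omitted.

```python
# Illustrative blending ensemble (not the paper's exact pipeline).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def blend_fit_eval(X, y, seed=42):
    # Three disjoint splits: base training, meta (blending) training, test.
    X_base, X_rest, y_base, y_rest = train_test_split(
        X, y, test_size=0.4, random_state=seed)
    X_meta, X_test, y_meta, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed)

    bases = [RandomForestClassifier(random_state=seed),
             GradientBoostingClassifier(random_state=seed)]
    for base in bases:
        base.fit(X_base, y_base)

    def stacked(Z):  # base-model probabilities become meta-features
        return np.hstack([base.predict_proba(Z) for base in bases])

    meta = LogisticRegression(max_iter=1000).fit(stacked(X_meta), y_meta)
    print("blended accuracy:",
          accuracy_score(y_test, meta.predict(stacked(X_test))))
    return bases, meta

# Usage: blend_fit_eval(X, y) with X, y built from e.g. CICIoT2023 features.
```
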
Applied Sciences, Journal Year: 2025, Volume and Issue: 15(2), P. 538 - 538, Published: Jan. 8, 2025
In this paper, we address the issues of explainability of reinforcement learning-based machine learning agents trained with Proximal Policy Optimization (PPO) that utilize visual sensor data. We propose an algorithm that allows an effective and intuitive approximation of the PPO-trained neural network (NN). We conduct several experiments to confirm our method’s effectiveness. Our proposed method works well for scenarios where semantic clustering of the scene is possible. The approach is based on the solid theoretical foundation of Gradient-weighted Class Activation Mapping (GradCAM) and Classification and Regression Tree (CART), with additional proxy geometry heuristics. It excels in explaining the decision process of a virtual simulation system using video of relatively low resolution. Depending on the convolutional feature extractor network, it obtains 0.945 to 0.968 accuracy in approximating the black-box model. The method has important application aspects. Through its use, it is possible to estimate the causes of specific decisions made by the NN due to the current state of the observed environment. This estimation makes it possible to determine whether the agent behaves as expected (decision-making related to the model’s observation of objects belonging to different classes in the environment) and to detect unexpected, seemingly chaotic behavior that might be, for example, the result of data bias, bad design of the reward function, or insufficient generalization abilities. We publish all source codes so that our experiments can be reproduced.
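The GradCAM component the abstract names can be sketched as below: a minimal PyTorch version under common conventions, not the paper's code. The CART distillation and proxy-geometry heuristics are omitted, and `model` and `target_layer` are placeholders for any CNN policy network and its last convolutional layer.

```python
# Minimal Grad-CAM sketch in PyTorch (illustrative only).
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    """Return a [0, 1] heatmap of where `image` drove the chosen output."""
    feats, grads = {}, {}
    fwd = target_layer.register_forward_hook(
        lambda mod, inp, out: feats.update(a=out))
    bwd = target_layer.register_full_backward_hook(
        lambda mod, g_in, g_out: grads.update(a=g_out[0]))

    model.eval()
    score = model(image.unsqueeze(0))[0, class_idx]  # scalar output of interest
    model.zero_grad()
    score.backward()
    fwd.remove()
    bwd.remove()

    A, dA = feats["a"][0], grads["a"][0]           # (C, H, W) maps and grads
    weights = dA.mean(dim=(1, 2))                  # global-average-pooled grads
    cam = F.relu((weights[:, None, None] * A).sum(dim=0))
    return cam / (cam.max() + 1e-8)                # coarse importance heatmap
```
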
Energy and Buildings, Journal Year: 2024, Volume and Issue: 318, P. 114426 - 114426, Published: Sept. 1, 2024
Accurate predictions of building energy consumption are essential for reducing the energy performance gap. While data-driven quantification methods based on machine learning deliver promising results, their lack of Explainability prevents their widespread application. To overcome this, Explainable Artificial Intelligence (XAI) was introduced. However, to this point, no research has examined how effective these explanations are concerning decision-makers, i.e., property owners. To address this, we implement three transparent models (Linear Regression, Decision Tree, QLattice) and apply four XAI methods (Partial Dependency Plots, Accumulated Local Effects, Local Interpretable Model-Agnostic Explanations, Shapley Additive Explanations) to an Artificial Neural Network using a real-world dataset of 25,000 residential buildings. We evaluate Prediction Accuracy and assess Explainability through a survey with 137 participants, considering the human-centered dimensions of explanation satisfaction and perceived fidelity. The results quantify the Explainability-Accuracy trade-off in energy consumption forecasting and show that it can be counteracted by choosing the right XAI method to foster informed retrofit decisions. For research, we set the foundation for further increasing the Explainability of data-driven methods and its evaluation. For practice, we encourage the use of XAI to reduce the acceptance gap of data-driven methods, whereby the XAI method should be selected carefully, as explanation satisfaction within the methods varies by up to 10%.
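Two of the four XAI methods named above can be demonstrated on a small surrogate setup; in the sketch below, synthetic data and an MLP stand in for the study's dataset and Artificial Neural Network, so all names and values are assumptions, not the study's setup.

```python
# Surrogate demonstration of Partial Dependency Plots and SHAP (illustrative).
import numpy as np
import shap                                        # SHapley Additive Explanations
from sklearn.inspection import PartialDependenceDisplay
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # e.g. area, age, insulation
y = 120 * X[:, 0] - 15 * X[:, 1] + 5 * X[:, 2] + rng.normal(scale=5, size=500)

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)

# Partial Dependency Plot: average prediction as each feature varies
# (requires matplotlib, a soft dependency of scikit-learn).
PartialDependenceDisplay.from_estimator(ann, X, features=[0, 1])

# SHAP: additive per-prediction feature attributions via the model-agnostic
# (permutation) explainer with a background sample.
explainer = shap.Explainer(ann.predict, X[:100])
print(explainer(X[:5]).values)                     # one attribution per feature
```
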
Advances in marketing, customer relationship management, and e-services book series, Journal Year: 2024, Volume and Issue: unknown, P. 299 - 342, Published: July 26, 2024
Artificial intelligence (AI) is revolutionizing banking by improving client engagement and operational efficiency with personalized solutions. This chapter analyses how AI-powered customer service enhances operations and customizes offerings. AI tools help banks learn customer preferences and behaviors by analyzing massive volumes of data, supporting a customer-centric strategy that promotes happiness and loyalty. The chapter reviews prominent banks' AI deployments through case studies, addresses data protection, ethics, and regulatory compliance, and offers advice for banks seeking competitive advantage. It also discusses trends like better credit evaluation, personalized services, and fraud protection. Banks can improve operations and provide better experiences using AI-driven service marketing. For professionals interested in using AI to create a competitive edge, this chapter provides practical tactics, insights, and recommendations for successful adoption in financial services.
Electronics, Journal Year: 2023, Volume and Issue: 12(22), P. 4572 - 4572, Published: Nov. 8, 2023
Quality assurance (QA) plays a crucial role in manufacturing to ensure that products meet their specifications. However, manual QA processes are costly and time-consuming, thereby making artificial intelligence (AI) an attractive solution for automation and expert support. In particular, convolutional neural networks (CNNs) have gained a lot of interest in visual inspection. Next to AI methods, explainable AI (XAI) systems, which achieve transparency and interpretability by providing insights into the decision-making process of the AI, are interesting methods for achieving quality inspections in manufacturing processes. In this study, we conducted a systematic literature review (SLR) to explore XAI approaches for visual quality assurance (VQA) in manufacturing. Our objective was to assess the current state of the art and identify research gaps in this context. Our findings revealed that AI-based systems predominantly focused on visual quality control (VQC) and defect detection. Research addressing other VQA practices, like optimization, predictive maintenance, or root cause analysis, is more rare. Least often, the cited papers utilize XAI methods. In conclusion, our survey emphasizes the importance and potential of XAI across various industries. By integrating XAI, organizations can enhance model transparency, interpretability, and trust in AI systems. Overall, leveraging XAI improves quality assurance practices.
River Publishers eBooks, Journal Year: 2024, Volume and Issue: unknown, P. 197 - 227, Published: Feb. 7, 2024
The increased complexity of artificial intelligence (AI), machine learning (ML) and deep learning (DL) methods, models, and training data to satisfy industrial application needs has emphasised the need for AI models providing explainability and interpretability. Model explainability aims to communicate the reasoning of AI/ML/DL technology to end users, while interpretability focuses on empowering transparency so that users will understand precisely why and how a model generates its results. Edge AI, which combines the Internet of Things (IoT) and edge computing to enable real-time data collection, processing, analytics, and decision-making, introduces new challenges in achieving explainable and interpretable methods. This is due to compromises among performance, constrained resources, complexity, power consumption, and the lack of benchmarking and standardisation in edge environments. This chapter presents the state of play of explainability and interpretability methods and techniques, discussing different benchmarking approaches and highlighting state-of-the-art development directions.