Enhancing Structured Query Language Injection Detection with Trustworthy Ensemble Learning and Boosting Models Using Local Explanation Techniques
Thi-Thu-Huong Le, Yeonsang Hwang, Changwoo Choi

et al.

Electronics, Journal Year: 2024, Volume and Issue: 13(22), P. 4350 - 4350

Published: Nov. 6, 2024

This paper presents a comparative analysis of several decision models for detecting Structured Query Language (SQL) injection attacks, which remain one of the most prevalent and serious security threats to web applications. SQL injection enables attackers to exploit databases, gain unauthorized access, and manipulate data. Traditional detection methods often struggle due to the constantly evolving nature of these attacks, the increasing complexity of modern applications, and the lack of transparency in the decision-making processes of machine learning models. To address these challenges, we evaluated the performance of various models, including decision tree, random forest, XGBoost, AdaBoost, Gradient Boosting Decision Tree (GBDT), and Histogram-based GBDT (HGBDT), using a comprehensive dataset. The primary motivation behind our approach is to leverage the strengths of ensemble and boosting techniques to enhance detection accuracy and robustness against attacks. By systematically comparing these models, we aim to identify the most effective algorithms for SQL injection detection systems. Our experiments show that AdaBoost achieved the highest performance, with an accuracy of 99.50% and an F1 score of 99.33%. Additionally, we applied SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) for local explainability, illustrating how each model classifies normal and attack cases. This explainability enhances the trustworthiness of the models. These findings highlight the potential of ensemble and boosting methods to provide reliable and efficient detection solutions, thereby improving the security of web applications.
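To make the general pipeline concrete, here is a minimal sketch assuming scikit-learn and the shap package: an AdaBoost classifier over character n-gram features of raw queries, followed by a local SHAP explanation of one prediction. The toy queries and TF-IDF featurization are illustrative assumptions, not the paper's actual dataset or feature engineering.

```python
# Sketch: AdaBoost for SQL injection detection plus a SHAP local explanation.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
import shap

queries = [
    "SELECT name FROM users WHERE id = 42",          # benign
    "SELECT * FROM users WHERE id = 1 OR 1=1 --",    # injection
    "UPDATE items SET price = 10 WHERE sku = 'a1'",  # benign
    "admin' UNION SELECT password FROM users --",    # injection
] * 50  # repeated so the toy split is non-trivial
labels = [0, 1, 0, 1] * 50

# Character n-grams capture injection tokens such as ' OR 1=1 and --.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), max_features=100)
X = vec.fit_transform(queries).toarray()

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))

# Local explanation: which n-gram features pushed one query toward "attack".
explainer = shap.Explainer(clf.predict_proba, X_tr[:100])
shap_values = explainer(X_te[:1])
print(shap_values.values.shape)  # (1 sample, n_features, 2 classes)
```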

Language: English

Toward Enhanced Attack Detection and Explanation in Intrusion Detection System-Based IoT Environment Data
Thi-Thu-Huong Le, Rini Wisnu Wardhani, Dedy Septono Catur Putranto

et al.

IEEE Access, Journal Year: 2023, Volume and Issue: 11, P. 131661 - 131676

Published: Jan. 1, 2023

Securing the Internet of Things (IoT) against cyber threats is a formidable challenge, and Intrusion Detection Systems (IDS) play a critical role in this effort. However, the lack of transparent explanations for IDS decisions remains a significant concern. In response, we introduce a novel approach that leverages a blending model for attack classification and integrates counterfactual and Local Interpretable Model-Agnostic Explanations (LIME) techniques to enhance those explanations. To assess the effectiveness of our approach, we conducted experiments using the recently introduced CICIoT2023 and IoTID20 datasets. These datasets are real-time, large-scale benchmarks of IoT environment attacks, offering realistic and challenging scenarios that capture the intricacies of intrusion detection in dynamic environments. Our experimental results demonstrate improvements in accuracy compared to conventional methods. Furthermore, the proposed approach provides clear and interpretable insights into the factors influencing IDS decisions, empowering users to make informed security choices. Integrating explanation techniques enhances the reliability of intrusion detection systems. Therefore, this work represents an advancement in attack detection, offering a robust defense against cyber-attacks on IoT environment data.
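As an illustration of the blending idea, the sketch below trains two base classifiers on one split, fits a logistic-regression meta-model on their held-out scores, and then explains a single prediction with LIME. The synthetic data and model choices are our assumptions, standing in for the CICIoT2023/IoTID20 features and the paper's exact ensemble.

```python
# Sketch: blending ensemble for attack classification + LIME explanation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
# Blending: base models fit on one part, meta-model on their held-out scores.
X_base, X_hold, y_base, y_hold = train_test_split(X_tr, y_tr, test_size=0.3,
                                                  random_state=0)

bases = [RandomForestClassifier(random_state=0).fit(X_base, y_base),
         GradientBoostingClassifier(random_state=0).fit(X_base, y_base)]

def base_scores(X):
    return np.column_stack([m.predict_proba(X)[:, 1] for m in bases])

meta = LogisticRegression().fit(base_scores(X_hold), y_hold)

def blend_proba(X):
    return meta.predict_proba(base_scores(X))

print("Test accuracy:", (blend_proba(X_te).argmax(1) == y_te).mean())

# LIME explains one flow by perturbing it and fitting a local linear model.
explainer = LimeTabularExplainer(X_tr, mode="classification",
                                 class_names=["normal", "attack"])
exp = explainer.explain_instance(X_te[0], blend_proba, num_features=5)
print(exp.as_list())  # top features driving this single prediction
```

Blending uses a single holdout split for the meta-model, which is cheaper than stacking's cross-validated out-of-fold predictions but uses less of the training data for the base models.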

Language: English

Citations: 25

On Explainability of Reinforcement Learning-Based Machine Learning Agents Trained with Proximal Policy Optimization That Utilizes Visual Sensor Data
Tomasz Hachaj, Marcin Piekarczyk

Applied Sciences, Journal Year: 2025, Volume and Issue: 15(2), P. 538 - 538

Published: Jan. 8, 2025

In this paper, we address the explainability of reinforcement learning-based machine learning agents trained with Proximal Policy Optimization (PPO) that utilize visual sensor data. We propose an algorithm that allows an effective and intuitive approximation of the PPO-trained neural network (NN). We conduct several experiments to confirm our method's effectiveness. Our proposed method works well for scenarios where semantic clustering of the scene is possible. The approach is based on the solid theoretical foundation of Gradient-weighted Class Activation Mapping (GradCAM) and Classification and Regression Tree (CART), with additional proxy geometry heuristics. It excels in the explanation process in a virtual simulation system based on video of relatively low resolution. Depending on the convolutional feature extractor of the network, it obtains from 0.945 to 0.968 accuracy of approximation of the black-box model. The method has important application aspects. Through its use, it is possible to estimate the causes of specific decisions made by the agent due to the current state of the observed environment. This estimation makes it possible to determine whether the agent works as expected (decision-making related to the model's observation of objects belonging to different classes in the environment) and to detect unexpected, seemingly chaotic behavior that might be, for example, the result of data bias, bad design of the reward function, or insufficient generalization abilities of the model. We publish all source codes so our experiments can be reproduced.
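The core surrogate step can be sketched as fitting a CART tree to imitate the black-box policy's action choices and measuring fidelity (agreement with the policy), as below. Note that the paper derives tree features from GradCAM-driven semantic clustering of the scene; the random state vectors and toy policy head here are simplifying assumptions.

```python
# Sketch: decision-tree surrogate of a policy network, with fidelity score.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

torch.manual_seed(0)
# Small stand-in for a PPO policy head over 4 discrete actions.
policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))

states = torch.randn(5000, 16)               # stand-in observation features
with torch.no_grad():
    actions = policy(states).argmax(dim=1)   # black-box action labels

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
tree.fit(states.numpy(), actions.numpy())

# Fidelity: how often the interpretable tree reproduces the policy's action.
fidelity = (tree.predict(states.numpy()) == actions.numpy()).mean()
print(f"surrogate fidelity: {fidelity:.3f}")  # analogous to the 0.945-0.968 range
```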

Language: English

Citations: 1

Leveraging explainable AI for informed building retrofit decisions: Insights from a survey
Daniel Leuthe, Jonas Mirlach, Simon Wenninger

et al.

Energy and Buildings, Journal Year: 2024, Volume and Issue: 318, P. 114426 - 114426

Published: Sept. 1, 2024

Accurate predictions of building energy consumption are essential for reducing the energy performance gap. While data-driven quantification methods based on machine learning deliver promising results, their lack of explainability prevents their widespread application. To overcome this, Explainable Artificial Intelligence (XAI) was introduced. However, to this point, no research has examined how effective these explanations are for decision-makers, i.e., property owners. To address this, we implement three transparent models (Linear Regression, Decision Tree, QLattice) and apply four XAI methods (Partial Dependency Plots, Accumulated Local Effects, Local Interpretable Model-Agnostic Explanations, Shapley Additive Explanations) to an Artificial Neural Network, using a real-world dataset of 25,000 residential buildings. We evaluate prediction accuracy and explainability through a survey with 137 participants, considering the human-centered dimensions of explanation satisfaction and perceived fidelity. The results quantify the explainability-accuracy trade-off in energy forecasting and show how it can be counteracted by choosing the right XAI method to foster informed retrofit decisions. For research, we set the foundation for further increasing the explainability of data-driven quantification methods and their evaluation. For practice, we encourage the use of XAI to reduce the acceptance gap of data-driven methods, whereby the method should be selected carefully, as explanation satisfaction varies by up to 10% within methods.
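The NN-plus-SHAP part of such a pipeline might look like the following sketch, assuming scikit-learn and the shap package; the synthetic building features, toy target, and MLP architecture are our assumptions rather than the study's setup.

```python
# Sketch: SHAP values for a neural-network energy-consumption model.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(50, 500, n),     # floor area (m^2)
    rng.uniform(1900, 2020, n),  # construction year
    rng.uniform(0.5, 3.0, n),    # U-value proxy (W/m^2K)
])
# Toy target: consumption rises with area and U-value, falls for newer builds.
y = 80 * X[:, 2] + 0.4 * X[:, 0] - 0.5 * (X[:, 1] - 1900) + rng.normal(0, 5, n)

nn_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                        random_state=0).fit(X, y)

# Model-agnostic SHAP over a background sample of 100 buildings.
explainer = shap.Explainer(nn_model.predict, X[:100],
                           feature_names=["area_m2", "year", "u_value"])
sv = explainer(X[:5])
print(sv.values)       # per-feature contributions for five buildings
print(sv.base_values)  # expected prediction over the background sample
```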

Language: English

Citations: 5

Improving Efficiency Through AI-Powered Customer Engagement by Providing Personalized Solutions in the Banking Industry
Buddhika Nishadi Kaluarachchi, Darshana Sedera

Advances in marketing, customer relationship management, and e-services book series, Journal Year: 2024, Volume and Issue: unknown, P. 299 - 342

Published: July 26, 2024

Artificial intelligence (AI) is revolutionizing banking by improving client engagement and operational efficiency with personalized solutions. This chapter analyses how AI-powered customer engagement enhances operations and customizes services. AI tools help banks learn customer preferences and behaviors by analyzing massive volumes of data, supporting a customer-centric strategy that promotes happiness and loyalty. The chapter reviews prominent banks' AI deployments and case studies, addresses data protection, ethics, and regulatory compliance, and offers advice for banks seeking competitive advantage. It also discusses trends like better credit evaluation, personalized services, and fraud protection. Banks can improve operations and provide tailored experiences using AI-driven customer service and marketing. For professionals interested in using AI to create a competitive edge, this chapter provides practical tactics, insights, and recommendations for successful AI adoption in financial services.

Language: English

Citations: 4

A Systematic Literature Review on Artificial Intelligence and Explainable Artificial Intelligence for Visual Quality Assurance in Manufacturing
Rudolf Hoffmann, Christoph Reich

Electronics, Journal Year: 2023, Volume and Issue: 12(22), P. 4572 - 4572

Published: Nov. 8, 2023

Quality assurance (QA) plays a crucial role in manufacturing to ensure that products meet their specifications. However, manual QA processes are costly and time-consuming, making artificial intelligence (AI) an attractive solution for automation and expert support. In particular, convolutional neural networks (CNNs) have gained a lot of interest for visual inspection. Next to AI methods, explainable AI (XAI) systems, which achieve transparency and interpretability by providing insights into the decision-making process of the AI, are interesting methods for achieving quality inspections in manufacturing processes. In this study, we conducted a systematic literature review (SLR) to explore AI and XAI approaches for visual quality assurance (VQA) in manufacturing. Our objective was to assess the current state of the art and identify research gaps in this context. Our findings revealed that AI-based systems predominantly focus on visual quality control (VQC) for defect detection. Research addressing broader VQA practices, like process optimization, predictive maintenance, or root cause analysis, is rarer, and the least-cited papers are those that utilize XAI methods. In conclusion, this survey emphasizes the importance and potential of AI and XAI for VQA across various industries. By integrating XAI, organizations can enhance model transparency, interpretability, and trust in AI systems. Overall, leveraging AI and XAI improves VQA practices in manufacturing.
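One XAI technique commonly applied to inspection CNNs of the kind the review covers is Grad-CAM; a minimal sketch follows. A ResNet-18 with random weights stands in for a trained defect-detection model (our assumption); in practice you would load the trained inspection network and a real product image.

```python
# Sketch: Grad-CAM heatmap for a CNN classifier (stand-in for defect detection).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # stand-in; load trained weights in practice
model.eval()

acts, grads = {}, {}
# Hook the last convolutional stage, whose spatial maps Grad-CAM weights.
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o.detach()))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: grads.update(v=go[0].detach()))

x = torch.randn(1, 3, 224, 224)        # placeholder product image
scores = model(x)
cls = scores.argmax(dim=1).item()      # e.g. the "defect" class
model.zero_grad()
scores[0, cls].backward()

# Channel weights = spatial mean of gradients; ReLU keeps positive evidence.
w = grads["v"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) heatmap of decision-relevant regions
```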

Language: English

Citations: 11

Designing Explainable Artificial Intelligence with Active Inference: A Framework for Transparent Introspection and Decision-Making
Mahault Albarracin, Inês Hipólito, Safae Essafi Tremblay

et al.

Communications in computer and information science, Journal Year: 2023, Volume and Issue: unknown, P. 123 - 144

Published: Nov. 15, 2023

Language: English

Citations: 10

Future Research Directions
Shamneesh Sharma, Neha Kumra, Meghna Luthra

et al.

Published: March 7, 2025

Language: English

Citations: 0

A distribution-preserving method for resampling combined with LightGBM-LSTM for sequence-wise fraud detection in credit card transactions
Behnam Yousefimehr, Mehdi Ghatee

Expert Systems with Applications, Journal Year: 2024, Volume and Issue: unknown, P. 125661 - 125661

Published: Nov. 1, 2024

Language: English

Citations: 2

On the Black-box Explainability of Object Detection Models for Safe and Trustworthy Industrial Applications
Alain Andrés, Aitor Martínez-Seras, Ibai Laña

et al.

Results in Engineering, Journal Year: 2024, Volume and Issue: unknown, P. 103498 - 103498

Published: Nov. 1, 2024

Language: English

Citations: 2

Explainability and Interpretability Concepts for Edge AI Systems
Ovidiu Vermesan, Vincenzo Piuri, Fabio Scotti

et al.

River Publishers eBooks, Journal Year: 2024, Volume and Issue: unknown, P. 197 - 227

Published: Feb. 7, 2024

The increased complexity of the artificial intelligence (AI), machine learning (ML), and deep learning (DL) methods, models, and training data needed to satisfy industrial application needs has emphasised the need for AI models that provide explainability and interpretability. Model explainability aims to communicate the reasoning of AI/ML/DL technology to end users, while interpretability focuses on empowering transparency so that users will understand precisely why and how a model generates its results. Edge AI, which combines the Internet of Things (IoT) and edge computing to enable real-time data collection, processing, analytics, and decision-making, introduces new challenges for achieving explainable and interpretable methods. This is due to compromises among performance, constrained resources, complexity, power consumption, and the lack of benchmarking and standardisation in edge environments. This chapter presents the state of play of explainability and interpretability methods and techniques, discussing different benchmarking approaches and highlighting state-of-the-art development directions.

Language: English

Citations: 1