Published: Sept. 2, 2024
Language: English
Artificial Intelligence Review, Journal Year: 2024, Volume and Issue: 57(11)
Published: Sept. 18, 2024
Abstract In recent years, Advanced Persistent Threat (APT) attacks on network systems have increased through sophisticated fraud tactics. Traditional Intrusion Detection Systems (IDSs) suffer from low detection accuracy, high false-positive rates, and difficulty identifying unknown attacks such as remote-to-local (R2L) and user-to-root (U2R) attacks. This paper addresses these challenges by providing a foundational discussion of APTs and the limitations of existing methods. It then pivots to explore the novel integration of deep learning techniques with Explainable Artificial Intelligence (XAI) to improve APT detection. It aims to fill gaps in current research through a thorough analysis of how XAI methods, such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), can make black-box models more transparent and interpretable. The objective is to demonstrate the necessity of explainability and propose solutions that enhance the trustworthiness and effectiveness of detection models. The paper offers a critical review of existing approaches, highlights their strengths and limitations, and identifies open issues that require further research. It also suggests future directions to combat evolving threats, paving the way for effective and reliable cybersecurity solutions. Overall, this work emphasizes the importance of explainability in enhancing the performance of detection systems.
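The SHAP and LIME methods named in this abstract share a model-agnostic core idea: perturb inputs around one instance and fit a simple interpretable surrogate to the black-box outputs, reading feature attributions off the surrogate. Below is a minimal numpy-only sketch of a LIME-style local linear surrogate; the `black_box` scoring function and its three features are illustrative assumptions, not the surveyed detection models:

```python
import numpy as np

def black_box(X):
    # Hypothetical opaque "detector score": nonlinear in feature 0,
    # linear in feature 1, and independent of feature 2.
    return np.sin(X[:, 0]) + 2.0 * X[:, 1] + 0.0 * X[:, 2]

def lime_style_explain(f, x, n_samples=5000, sigma=0.1, seed=0):
    """Fit a locally weighted linear surrogate to f around instance x."""
    rng = np.random.default_rng(seed)
    # Sample perturbations of the instance.
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    y = f(Z)
    # Proximity kernel: perturbations closer to x get more weight.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), Z])
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(W * A, W[:, 0] * y, rcond=None)
    return coef[1:]  # local attributions: one slope per feature

x0 = np.array([0.0, 1.0, 3.0])
attr = lime_style_explain(black_box, x0)
print(attr)
```

Near `x0` the surrogate's slopes recover the local behaviour (roughly 1 for feature 0, 2 for feature 1, 0 for the irrelevant feature 2), which is exactly the kind of per-instance transparency the abstract argues black-box IDS models need.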
Language: English
Citations: 10
Deleted Journal, Journal Year: 2025, Volume and Issue: 7(5)
Published: May 14, 2025
Language: English
Citations: 0
Advances in computational intelligence and robotics book series, Journal Year: 2025, Volume and Issue: unknown, P. 431 - 460
Published: April 30, 2025
In today's competitive digital marketplace, where personalized and meaningful employee experiences are essential to building winning teams, the transparency provided by Explainable AI (XAI) plays a crucial role in fostering trust and driving organizational success. As intelligent systems increasingly monitor and evaluate performance, forward-thinking organizations must ensure that AI-driven processes are fair, unbiased, and privacy-conscious to maintain their employer brand value. This chapter examines how XAI strategically transforms talent management, from recruitment through retention, enabling organizations to leverage AI insights while differentiating themselves as employers of choice. By integrating XAI, companies can make informed decisions, mitigate bias, and create a competitive advantage in the marketplace. While XAI offers significant benefits, challenges remain in addressing algorithmic bias and ensuring fairness. We explore these trends and challenges, highlighting AI's evolving role in talent management and its impact on employer brand positioning.
Language: English
Citations: 0
Artificial Intelligence Review, Journal Year: 2025, Volume and Issue: 58(8)
Published: May 3, 2025
Language: English
Citations: 0
Sensors, Journal Year: 2024, Volume and Issue: 24(24), P. 8039 - 8039
Published: Dec. 17, 2024
The hindering of Global Navigation Satellite System (GNSS) signal reception by jamming and spoofing attacks degrades signal quality. Careful attention needs to be paid when post-processing signals under these circumstances before feeding them into the GNSS receiver's acquisition stage. The identification of time-domain statistical attributes and spectral characteristics plays a vital role in analyzing the behaviour of various kinds of attacks and multipath scenarios. In this paper, signal records of several disruption types (pure, continuous wave interference (CWI), multi-tone CWI (MCWI), multipath (MP), spoofing, pulse, and chirp) are examined, and the most influential features in both time and frequency domains are identified with the help of explainable AI (XAI) models. Different machine learning (ML) techniques were employed to assess feature importance for the model's prediction. From the analysis, it has been observed that applying SHapley Additive exPlanations (SHAP) and local interpretable model-agnostic explanations (LIME) models to signals of the tested disruption types and unknown signals, using only the best-correlated and most important features in the training phase, provided better classification accuracy and prediction compared to traditional feature selection methods. This XAI model reveals the black-box ML model's output and provides a clear explanation of specific occurrences based on individual feature contributions. With this revealer, we can easily analyze ground-station signals and employ fault detection and resilience diagnosis in post-processing.
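As a rough illustration of the time- and frequency-domain attributes this abstract ranks by importance, the sketch below computes a few such features for a synthetic noise-like signal versus the same signal with a single-tone continuous-wave interferer (CWI); the signal model, sample rate, and feature choices are simplifying assumptions, not the paper's actual dataset or feature set:

```python
import numpy as np

def features(sig):
    """Toy time/frequency-domain feature vector for a 1-D signal."""
    mu, sd = sig.mean(), sig.std()
    kurtosis = np.mean(((sig - mu) / sd) ** 4)   # time-domain shape statistic
    spec = np.abs(np.fft.rfft(sig)) ** 2         # power spectrum
    peak_ratio = spec.max() / spec.mean()        # dominance of a single tone
    return {"std": sd, "kurtosis": kurtosis, "peak_ratio": peak_ratio}

fs, n = 5_000_000, 4096                          # assumed sample rate and length
rng = np.random.default_rng(1)
t = np.arange(n) / fs
clean = rng.normal(size=n)                       # noise-like GNSS baseband proxy
cwi = clean + 5.0 * np.cos(2 * np.pi * 1.2e6 * t)  # strong single-tone jammer

f_clean, f_cwi = features(clean), features(cwi)
print(f_clean["peak_ratio"], f_cwi["peak_ratio"])
```

A CWI tone concentrates power in one FFT bin, so `peak_ratio` jumps by orders of magnitude while `std` changes only modestly; ranking features by how strongly they separate disruption classes in this way is the intuition behind the SHAP/LIME-guided feature selection the abstract describes.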
Language: English
Citations: 2
2022 14th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), Journal Year: 2024, Volume and Issue: unknown, P. 1 - 7
Published: June 27, 2024
Language: English
Citations: 0
Published: Sept. 2, 2024
Language: English
Citations: 0