Generative AI for Threat Intelligence and Information Sharing DOI
Siva Raja Sindiramutty,

Krishna Raj V. Prabagaran,

N. Z. Jhanjhi

et al.

Advances in digital crime, forensics, and cyber terrorism book series, Journal Year: 2024, Volume and Issue: unknown, P. 191 - 234

Published: Sept. 12, 2024

Collaboration in producing threat intelligence and sharing information enables cybersecurity professionals to navigate a digital landscape whose risks are ever-changing. This article examines how machine learning can change that practice by categorising indicators of compromise (IOCs) and threat actors, and it highlights the limits of traditional methods. Generative AI tools such as generative adversarial networks (GANs) and variational autoencoders (VAEs) are key innovations here: they can create synthetic data that emulates real past attack scenarios, allowing cyber threats to be analysed in new ways. This capability also supports secure stakeholder collaboration, since synthetic data protects private information while still allowing the exchange of useful intelligence. Real-world examples demonstrate how generative AI can automate parts of cybersecurity detection.
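The synthetic-data idea can be illustrated with a minimal sketch, assuming numeric IOC-style feature vectors; the feature width, placeholder training data, and training schedule below are assumptions for illustration, not the chapter's actual pipeline. A tiny PyTorch GAN learns the distribution of private records and emits synthetic stand-ins that could be shared instead of raw telemetry.

```python
# Minimal GAN sketch (PyTorch): learn to emit synthetic, shareable
# stand-ins for numeric IOC feature vectors. Illustrative only; the
# feature layout and training data are assumptions.
import torch
import torch.nn as nn

FEATURES, LATENT = 16, 8  # assumed width of an IOC feature record

G = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, FEATURES))
D = nn.Sequential(nn.Linear(FEATURES, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(512, FEATURES)  # placeholder for real, private telemetry

for step in range(200):
    # Discriminator step: real records vs. generator output
    z = torch.randn(64, LATENT)
    fake = G(z).detach()
    batch = real[torch.randint(0, len(real), (64,))]
    loss_d = bce(D(batch), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to fool the discriminator
    z = torch.randn(64, LATENT)
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Synthetic records that mimic the real distribution and can be shared
synthetic = G(torch.randn(100, LATENT)).detach()
```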

Language: English

Citations

1

Generative AI in Network Security and Intrusion Detection DOI
Siva Raja Sindiramutty,

Krishna Raj V. Prabagaran,

N. Z. Jhanjhi

et al.

Advances in information security, privacy, and ethics book series, Journal Year: 2024, Volume and Issue: unknown, P. 77 - 124

Published: July 26, 2024

Protecting virtual assets from cyber threats is essential in a digitally advanced world, and proper network security and intrusion detection are therefore imperative. Traditional strategies, however, need supporting tools to adapt to a transforming threat space. Generative AI techniques such as generative adversarial networks (GANs) and variational autoencoders (VAEs) are mainstream technologies that can fill this gap. This chapter shows how these models can enhance network security by inspecting traffic for anomalies and malicious behaviours detected through unsupervised learning, which accounts for strange or emerging phenomena. The survey covers innovations in fault detection, behaviour monitoring, deep packet inspection, and classification, along with examples of real-world intrusions addressed by GAN-based systems. It also addresses adversarial attacks on these models, which require the development of solid defence mechanisms. Ethics is the next topic of discussion, since privacy, transparency, and accountability must be observed when working on security. Finally, the authors examine trends in how cyber-attacks may be dealt with comprehensively.
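A minimal sketch of the unsupervised-anomaly idea, assuming simple numeric flow features and benign-only training data (both placeholders, not the chapter's concrete system): an autoencoder is trained on benign traffic, and flows with unusually high reconstruction error are flagged.

```python
# Unsupervised anomaly detection sketch: train an autoencoder on
# benign flow features only, then flag traffic whose reconstruction
# error is unusually high. Feature set and threshold are assumptions.
import torch
import torch.nn as nn

FEATURES = 20  # e.g., packet counts, byte counts, duration (assumed)

model = nn.Sequential(
    nn.Linear(FEATURES, 8), nn.ReLU(),   # encoder
    nn.Linear(8, FEATURES),              # decoder
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

benign = torch.rand(2048, FEATURES)  # placeholder for benign-only traffic

for _ in range(300):
    recon = model(benign)
    loss = ((recon - benign) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

def anomaly_score(x: torch.Tensor) -> torch.Tensor:
    """Per-flow reconstruction error; larger means less 'benign-like'."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

# Flag anything above, say, the 99th percentile of benign scores
threshold = anomaly_score(benign).quantile(0.99)
alerts = anomaly_score(torch.rand(10, FEATURES)) > threshold
```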

Language: English

Citations

1

Overview of Generative AI Techniques for Cybersecurity DOI
Siva Raja Sindiramutty,

Krishna Raj V. Prabagaran,

Rehan Akbar

et al.

Advances in information security, privacy, and ethics book series, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 52

Published: July 26, 2024

Generative AI techniques have become popular because they can generate data or content that is hard to distinguish from the genuine article. This chapter comprehensively reviews generative AI for cybersecurity, covering its definition, history, and applications in different fields. It introduces basic ideas such as generative models, probability distributions, and latent spaces, and then goes into more detail on approaches such as GANs, VAEs, and their combination with reinforcement learning (RL). The chapter explores the structure and training processes of GANs and VAEs and demonstrates their application to tasks such as image synthesis, enhancement, and novelty detection. The interaction between RL and generative models brings its own challenges, including the exploration-exploitation trade-off. The chapter also covers development with the help of deep learning and analyses the benefits of deep generative models in various settings. Evaluation measures and the problems of measuring generative quality are discussed, with a focus on methods for improving measurement accuracy. Finally, new directions such as transformer-based models and self-supervised learning point toward the future of generative AI. Emphasis is placed on understanding these techniques because of their versatility, and possible further developments and findings from other fields of study are provided.
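To make the basic ideas concrete (a latent space, a probability distribution over it, and an ELBO-style loss), here is a minimal VAE sketch in PyTorch; the dimensions and placeholder data are illustrative assumptions rather than anything from the chapter.

```python
# Minimal VAE sketch (PyTorch): encoder maps data to a latent Gaussian,
# decoder reconstructs it, and the loss is reconstruction + KL to the prior.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=32, z_dim=4):
        super().__init__()
        self.enc = nn.Linear(x_dim, 16)
        self.mu = nn.Linear(16, z_dim)
        self.logvar = nn.Linear(16, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 16), nn.ReLU(),
                                 nn.Linear(16, x_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: sample z while keeping gradients
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def elbo_loss(x, recon, mu, logvar):
    # Reconstruction term + KL divergence to the unit Gaussian prior
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

vae = TinyVAE()
x = torch.rand(64, 32)          # placeholder data
loss = elbo_loss(x, *vae(x))
loss.backward()                 # an optimiser step would follow in training
```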

Language: English

Citations

1

Risk Assessment and Mitigation With Generative AI Models DOI
Siva Raja Sindiramutty, N. Z. Jhanjhi,

Rehan Akbar

et al.

Advances in digital crime, forensics, and cyber terrorism book series, Journal Year: 2024, Volume and Issue: unknown, P. 29 - 82

Published: Sept. 12, 2024

Cybersecurity organisations constantly face a risky environment in which threats are ever-present. These dangers can jeopardise information, disrupt business operations, and erode trust. Risk assessment and mitigation strategies are crucial to tackling these challenges effectively. However, traditional approaches, which rely on judgments derived from manual analysis, often struggle to keep pace with the changing cyber landscape. This section delves into how adopting generative AI techniques, such as generative adversarial networks (GANs) and variational autoencoders (VAEs), can transform risk assessment methods: they simulate scenarios, identify anomalies more efficiently than before, and predict potential future risks in near real time through unsupervised learning. By integrating threat intelligence into these models, the authors improve the understanding of the contextual factors behind abnormal, high-risk behaviours.
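As a hedged illustration of generative risk scoring, the sketch below substitutes a simple Gaussian mixture for a GAN/VAE: it is fitted to historical event features, used to sample hypothetical scenarios, and low-likelihood events are treated as high risk. The feature dimensions, placeholder data, and cut-off are assumptions for the example, not the chapter's method.

```python
# Generative risk-scoring sketch with scikit-learn: fit a generative
# model to historical events, sample scenarios, and score new events
# by how unlikely they are under the fitted model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
history = rng.normal(size=(1000, 6))   # placeholder historical event features

gm = GaussianMixture(n_components=3, random_state=0).fit(history)

# Simulate plausible future scenarios, e.g. for tabletop exercises
scenarios, _ = gm.sample(200)

def risk_score(events: np.ndarray) -> np.ndarray:
    """Lower log-likelihood under the model means a more unusual, riskier event."""
    return -gm.score_samples(events)

# Flag events that are rarer than 99% of historical data
cutoff = np.percentile(risk_score(history), 99)
new_events = rng.normal(size=(5, 6))
flagged = risk_score(new_events) > cutoff
```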

Language: English

Citations

0

Ethics and Transparency in Secure Web Model Generation DOI
Siva Raja Sindiramutty,

Krishna Raj V. Prabagaran,

N. Z. Jhanjhi

et al.

Advances in information security, privacy, and ethics book series, Journal Year: 2024, Volume and Issue: unknown, P. 411 - 464

Published: July 26, 2024

The chapter discusses how ethics and transparency relate to creating secure web models with AI. Because AI plays a central role in development, the authors treat ethics and transparency as two critical aspects of the subject, with consequences for users, stakeholders, and society. The examination begins with principles, including fairness, accountability, and privacy requirements. The authors then turn to the problems these raise for models: they break down bias and fairness concerns at their source and find ways to resolve them. This relates to trust, where explainability and transparency are highlighted. They also provide case studies showing the effectiveness of transparent, explainable models in increasing user engagement, and they delve into decision-making frameworks that help navigate ethical dilemmas in development. The discussion extends to empowerment tools such as monitoring, evaluation guidelines, and implementation practices for governance. To sum up, the authors underline these views and urge all stakeholders to make ethics and transparency the cornerstones of responsible AI-driven web development.

Language: English

Citations

0

Generative AI for Secure User Interface (UI) Design DOI
Siva Raja Sindiramutty,

Krishna Raj V. Prabagaran,

Rehan Akbar

et al.

Advances in information security, privacy, and ethics book series, Journal Year: 2024, Volume and Issue: unknown, P. 333 - 394

Published: July 26, 2024

Generative AI, with its unique capabilities, is about to turn the world of secure user interface (UI) design upside down and open up endless possibilities, in which users can rely on mature solutions to protect their digital interactions from future security threats. This chapter takes a deep plunge into the merger of generative AI and UI design, presenting a complete exposition of the principles involved, the methodologies applied, their practical embodiment, and their ultimate ramifications. It begins by exploring the building blocks of UI design, including core principles and a user-centred, iterative approach, giving a robust framework for understanding security as a critical part of secure, intuitive, and engaging experiences. It then provides an overview of the generative approaches that could be deployed, such as GANs, VAEs, and autoregressive models, whose capabilities expand the scope of security measures, including authentication protocols, encryption, and access rights, while retaining usability and aesthetic appeal. Moreover, it surveys applications that support secure GUI design, among them the automatic generation of safe layout patterns, dynamic adaptation to emerging threats, and the creation of cryptographic keys and symbols.

Language: English

Citations

0

A Deep Learning Approach for Healthcare Insurance Fraud Detection DOI Creative Commons

Precious Sihle Shungube,

Tebogo Bokaba, Patrick Ndayizigamiye

et al.

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: Dec. 13, 2024

Healthcare fraud is a global financial challenge affecting economic stability and trust in services, and traditional machine learning models struggle to accurately capture its complexity and adaptive nature. This study investigates the application of three deep learning (DL) models, namely artificial neural networks (ANN), convolutional neural networks (CNN), and long short-term memory (LSTM) networks, for healthcare fraud detection. The study used claim data, including patient demographics, claim amounts, diagnostic codes, and procedure types, to analyse service usage and identify fraudulent activity. To enhance the interpretability of these models, local interpretable model-agnostic explanations (LIME) were used. The evaluation results showed that the ANN was the best performer, with an accuracy of 0.94, precision of 0.78, recall of 0.45, and F1-score of 0.57. While the CNN excelled in accuracy, the LSTM was more effective at reducing false negatives. For an example claim, LIME shows the prediction to be non-fraudulent with a high probability of 0.96, as opposed to 0.03 for 'PotentialFraud', along with the driving features, and the metrics show the model is good at correctly identifying cases. The study highlights the efficacy of integrating explainable AI (XAI) and contributes to the growing research body on insurance fraud detection.
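A sketch of the LIME step on tabular claim features, using a small scikit-learn MLP as a stand-in for the paper's ANN; the feature names, synthetic data, and toy labels below are assumptions, and only the LIME usage pattern is the point.

```python
# LIME on tabular claim features: train a stand-in classifier, then ask
# LIME which features pushed an individual prediction and by how much.
import numpy as np
from sklearn.neural_network import MLPClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
features = ["claim_amount", "patient_age", "num_procedures", "provider_claims"]  # assumed names
X = rng.normal(size=(1000, len(features)))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=1000) > 1).astype(int)  # toy fraud labels

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=features,
    class_names=["non-fraud", "potential fraud"],
    mode="classification",
)

# Explain a single claim: per-feature contributions to the predicted class
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
for feature_rule, weight in exp.as_list():
    print(feature_rule, round(weight, 3))
```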

Language: English

Citations

0

Ethical Implications of Artificial Intelligence: Ensuring Patient Data Security DOI
Ahtisham Ali

Approaches to global sustainability, markets, and governance, Journal Year: 2024, Volume and Issue: unknown, P. 149 - 164

Published: Jan. 1, 2024

Language: English

Citations

0