CyberSafe: A Gamified Cyber Security Training Method DOI

Obada Haitham Salman,

Moatsum Alawida,

Raneem Mohammad Abu Hazeem

et al.

2019 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), Journal Year: 2023, Volume and Issue: unknown, P. 1 - 6

Published: Dec. 30, 2023

In this paper, we address the growing concern of cyberattacks due to low awareness among computer users. Many organizations have started implementing workshops and training programs to educate their employees and users about cybersecurity. However, some of these lack sufficient depth of knowledge or fail to engage participants effectively. To overcome these limitations, we present a novel system called CyberSafe, developed using the Unity game engine. CyberSafe comprises multiple levels of training, each focusing on an important aspect of cybersecurity that users commonly encounter in their daily lives. A player can choose one of the levels and follow Nova AI's instructions. By providing a hands-on learning experience, CyberSafe aims to increase users' awareness and equip them with the necessary skills to navigate the internet safely and mitigate the risk of falling victim to cyberattacks. We conducted pre- and post-tests with users who interacted with the system, and the results demonstrate a significant improvement in their ability to detect different types of attacks. Moreover, the system offers a flexible, user-friendly, and enjoyable approach to enhancing cybersecurity awareness.
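
The abstract reports a significant pre/post improvement but does not state how significance was assessed. A minimal sketch of one common approach, a paired t-test on hypothetical per-participant scores (not the study's data), could look like this:

```python
# Hedged sketch: paired comparison of hypothetical pre/post-test scores.
# The paper does not specify its statistical test; a paired t-test is one
# common way to check whether the mean improvement is significant.
import numpy as np
from scipy import stats

# Hypothetical per-participant detection scores (0-100) before and after training.
pre_scores = np.array([45, 50, 38, 62, 55, 41, 47, 53, 60, 44])
post_scores = np.array([68, 72, 59, 80, 77, 63, 70, 74, 85, 66])

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_gain = (post_scores - pre_scores).mean()
print(f"mean gain: {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```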

Language: English

Comparative Analysis of ChatGPT-4 and Google Gemini for Spam Detection on the SpamAssassin Public Mail Corpus DOI Creative Commons
Ketut Mardiansyah, Wayan Surya

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: March 4, 2024

This study addresses the critical challenge of spam detection in the realm of cybersecurity, motivated by the escalating sophistication of spamming techniques and their significant implications for communication security. With the advent of advanced artificial intelligence (AI) models, this research compares the efficacy of two leading models, OpenAI's ChatGPT-4 and Google Gemini, in identifying spam within the widely recognized SpamAssassin public mail corpus. Through a meticulous methodology that includes preprocessing the dataset, applying standardized evaluation metrics (accuracy, precision, recall, F1-score), and conducting a detailed performance analysis, the study unveils the distinct capabilities of each model in spam detection. ChatGPT-4 demonstrates a balanced performance with high precision, making it suitable for general spam-detection tasks. In contrast, Gemini excels in recall, highlighting its potential in scenarios where capturing the maximum number of spam emails is paramount, despite a slightly higher tendency to misclassify legitimate messages as spam. These findings contribute valuable insights into the comparative strengths of the two models in cybersecurity contexts, offering a nuanced understanding of their roles in enhancing spam-detection mechanisms. The study underscores the significance of selecting AI models based on specific operational needs and sets a foundation for future work aimed at advancing AI-driven spam-detection solutions.
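
As an illustration of the standardized metrics the abstract names, a minimal sketch with hypothetical ham/spam labels and predictions (not the paper's data) could compute them with scikit-learn:

```python
# Hedged sketch: accuracy, precision, recall, and F1 on hypothetical labels.
# Labels are 1 = spam, 0 = ham; the arrays below are illustrative only.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # ground truth from a labeled corpus
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # a model's spam/ham predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
```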

Language: English

Citations

18

Unraveling the dark side of ChatGPT: a moderated mediation model of technology anxiety and technostress DOI
Cong Doanh Duong, Thi Viet Nga Ngo,

The Anh Khuc

et al.

Information Technology and People, Journal Year: 2024, Volume and Issue: unknown

Published: May 17, 2024

Purpose: Limited knowledge exists regarding the adverse effects of artificial intelligence adoption, including platforms like ChatGPT, on users' mental well-being. The current research adopts insights from the stressor-strain-outcome paradigm and a moderated mediation model to examine how technology anxiety moderates the direct and indirect relationships between compulsive ChatGPT use, technostress, and life satisfaction. Design/methodology/approach: Drawing on data from a sample of 2,602 ChatGPT users in Vietnam, the PROCESS macro was used to test the model. Findings: The findings indicate that compulsive use of ChatGPT exhibited a substantial positive impact on technostress, while technostress was found to have a negative influence on life satisfaction. Moreover, although compulsive use did not show a significant direct effect on life satisfaction, it indirectly impacts life satisfaction via technostress. Remarkably, technology anxiety was found to significantly moderate both associations. Practical implications: Based on this research, some practical implications are provided. Originality/value: The study offers a fresh perspective by applying the stressor-strain-outcome paradigm to provide empirical evidence on this relationship, and thus sheds new light on ChatGPT adoption and its effects on users' mental health.
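
The PROCESS macro is an SPSS/SAS/R tool; as a rough illustration of the kind of moderated mediation set-up the abstract describes (X: compulsive use, M: technostress, Y: life satisfaction, W: technology anxiety), a sketch on simulated data, not the study's dataset, could estimate the two moderated paths with ordinary least squares in statsmodels:

```python
# Hedged sketch: a moderated mediation set-up estimated with two OLS
# regressions on simulated data (illustrative only, not the study's sample).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "compulsive_use": rng.normal(size=n),   # X: compulsive ChatGPT use
    "tech_anxiety": rng.normal(size=n),     # W: technology anxiety (moderator)
})
# Simulated mediator (technostress) and outcome (life satisfaction).
df["technostress"] = (0.5 * df.compulsive_use
                      + 0.3 * df.compulsive_use * df.tech_anxiety
                      + rng.normal(size=n))
df["life_satisfaction"] = (-0.4 * df.technostress
                           - 0.2 * df.technostress * df.tech_anxiety
                           + rng.normal(size=n))

# Path a (X -> M) moderated by W, and path b (M -> Y) moderated by W.
m_model = smf.ols("technostress ~ compulsive_use * tech_anxiety", data=df).fit()
y_model = smf.ols("life_satisfaction ~ compulsive_use + technostress * tech_anxiety",
                  data=df).fit()
print(m_model.params)
print(y_model.params)
```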

Language: English

Citations

13

A Review of Advancements and Applications of Pre-Trained Language Models in Cybersecurity DOI
Zefang Liu

Published: April 29, 2024

In this paper, we delve into the transformative role of pre-trained language models (PLMs) in cybersecurity, offering a comprehensive examination of their deployment across a wide array of cybersecurity tasks. Beginning with an exploration of general PLMs, including recent advancements and the emergence of domain-specific models tailored for cybersecurity, we provide an insightful overview of the foundational technologies driving these developments. The core of our review focuses on the multifaceted applications of PLMs, ranging from malware and vulnerability detection to more nuanced areas like log analysis, network traffic analysis, and threat intelligence, among others. We also highlight recent strides in the application of large language models (LLMs), showcasing their growing influence in enhancing cybersecurity measures. By charting the landscape of PLM applications and pointing toward future directions, this work serves as a valuable resource for both the research community and industry practitioners, underlining the critical need for continued innovation in harnessing these models to fortify cyber defenses.
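
As a rough illustration of applying a general-purpose PLM to one of the cybersecurity tasks the review covers, a minimal sketch could use a zero-shot classifier from the Hugging Face transformers library to triage a suspicious email; the model choice and label set are assumptions for illustration, not taken from the reviewed papers:

```python
# Hedged sketch: zero-shot classification of a suspicious message with a
# general-purpose PLM; model and labels are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

email = ("Your account has been suspended. Verify your password immediately "
         "at http://example.com/login to avoid permanent deletion.")
labels = ["phishing", "legitimate notification", "spam advertisement"]

result = classifier(email, candidate_labels=labels)
print(list(zip(result["labels"], [round(s, 3) for s in result["scores"]])))
```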

Language: English

Citations

9

Seven HCI Grand Challenges Revisited: Five-Year Progress DOI Creative Commons
Constantine Stephanidis,

Gavriel Salvendy,

Margherita Antona

et al.

International Journal of Human-Computer Interaction, Journal Year: 2025, Volume and Issue: unknown, P. 1 - 49

Published: Feb. 4, 2025

Motivated by the rapid technological advancements achieved in the last five years and the pervasiveness of Artificial Intelligence (AI), this paper investigates the evolving role of HCI and revisits the seven grand challenges outlined in 2019: human-technology symbiosis; human-environment interactions; ethics, privacy and security; well-being, health and eudaimonia; accessibility and universal access; learning and creativity; and social organization and democracy. Through a literature analysis, it reevaluates the status of each challenge and highlights emerging requirements. Key findings reveal the widespread impact of AI across all domains and emphasize the need for improved transparency, alignment with human values, and the development of explainable, personalized, privacy-preserving technologies to enhance user trust and control. The analysis also reveals the interconnected nature of these challenges and firmly asserts the role of technology in actively supporting human activities and collaborating harmoniously with humans to help them live meaningfully and fulfill their aspirations.

Language: English

Citations

0

Text Mining Approaches for Exploring Research Trends in the Security Applications of Generative Artificial Intelligence DOI Creative Commons
J.Y. Kim,

Byeongsoo Koo,

Moonju Nam

et al.

Applied Sciences, Journal Year: 2025, Volume and Issue: 15(6), P. 3355 - 3355

Published: March 19, 2025

This study examines the security implications of generative artificial intelligence (GAI), focusing on models such as ChatGPT. As GAI technologies are increasingly integrated into industries like healthcare, education, and media, concerns are growing regarding vulnerabilities, ethical challenges, and the potential for misuse. The study not only synthesizes existing research but also conducts an original scientometric analysis using text mining techniques. To address these concerns, it analyzes 1047 peer-reviewed academic articles from the SCOPUS database using methods including Term Frequency–Inverse Document Frequency (TF-IDF) analysis, keyword centrality analysis, and Latent Dirichlet Allocation (LDA) topic modeling. The results highlight significant contributions from countries such as the United States, China, and India, with leading institutions such as the Chinese Academy of Sciences and the National University of Singapore driving research on GAI security. In the keyword analysis, "ChatGPT" emerged as a highly central term, reflecting its prominence in the discourse. However, despite frequent mention, it showed lower proximity than terms such as "model" and "AI", suggesting that while ChatGPT is broadly associated with other key themes, it has a less direct connection to specific subfields. Topic modeling identified six major themes, including AI language models, data processing, and risk management. The study emphasizes the need for robust governance frameworks and technical measures to ensure responsibility, manage risks, and support the safe deployment of GAI systems. These frameworks must incorporate solutions for accountability, regulatory compliance, and continuous monitoring. The study underscores the importance of an interdisciplinary approach that integrates technical, legal, and ethical perspectives for the responsible and secure use of GAI technologies.
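
To illustrate the text mining methods the study names, a minimal sketch on a handful of toy abstracts (not the study's SCOPUS corpus) could combine TF-IDF weighting and LDA topic modeling with scikit-learn:

```python
# Hedged sketch: TF-IDF term weighting and LDA topic modeling on toy documents,
# mirroring the methods named in the study; the corpus below is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "ChatGPT security risks in healthcare data privacy",
    "generative AI misuse and ethical challenges in education",
    "large language model vulnerabilities and risk management",
    "AI governance frameworks for safe model deployment",
]

# TF-IDF highlights terms that are distinctive for each document.
tfidf = TfidfVectorizer(stop_words="english")
tfidf_matrix = tfidf.fit_transform(docs)
print("TF-IDF vocabulary size:", len(tfidf.get_feature_names_out()))

# LDA is typically fit on raw term counts rather than TF-IDF weights.
counts = CountVectorizer(stop_words="english").fit(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts.transform(docs))
terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top_terms}")
```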

Language: English

Citations

0

Generative AI and criminology: A threat or a promise? Exploring the potential and pitfalls in the identification of Techniques of Neutralization (ToN) DOI Creative Commons
Federico Pacchioni, Emma Flutti,

Palmina Caruso

et al.

PLoS ONE, Journal Year: 2025, Volume and Issue: 20(4), P. e0319793 - e0319793

Published: April 4, 2025

Generative artificial intelligence (AI) such as GPT-4 refers to systems able to understand and generate new, coherent, and relevant text by learning from existing data sets. The great opportunities that it offers are accompanied by risks. Indeed, the ease of access and use of such a tool also makes it a platform of choice for malicious users. The purpose of this work is to test the machine's capabilities in identifying and reframing the so-called Techniques of Neutralization (ToN), the rationalizations employed by offenders to justify their deviant behavior. Their identification is a theoretical cornerstone of criminology and, in interviews with offenders, is crucial for criminologists, as it provides information on criminodynamics, risk assessment, and possible intervention strategies. Our outcomes show a high level of accuracy in general ToN recognition for both Published and Crafted sentences, in Test 1 (precision 0.82 and recall 0.75 for the "Denial of Injury" ToN, precision 0.93 for the "Absence of ToN" category) and Test 2 (precision 1.00 and recall 0.83 for the same categories). Regarding the reformulation of sentences to remove ToN (Test 3), the model demonstrates high success rates for most categories and credibility of the reformulated sentences, indicating its ability to maintain the integrity of the text while removing the ToN. The analyses concern the application of the machine to a previously untested construct, with the aim of observing the potential and, above all, the pitfalls behind AI models in the hitherto little-explored context of criminology.
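
As an illustration of the per-category precision and recall figures the abstract reports, a minimal sketch could score a hypothetical ToN labeling run with scikit-learn; the labels below are illustrative, not the study's data:

```python
# Hedged sketch: per-class precision and recall for a hypothetical ToN
# recognition task; the predictions are illustrative only.
from sklearn.metrics import classification_report

y_true = ["denial_of_injury", "absence_of_ton", "denial_of_injury",
          "absence_of_ton", "denial_of_injury", "absence_of_ton"]
y_pred = ["denial_of_injury", "absence_of_ton", "absence_of_ton",
          "absence_of_ton", "denial_of_injury", "absence_of_ton"]

print(classification_report(y_true, y_pred, zero_division=0))
```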

Language: English

Citations

0

The Darkside of Artificial Intelligence and the Metaverse in Scientific Research and Publishing DOI
Wasswa Shafik

Advances in computational intelligence and robotics book series, Journal Year: 2025, Volume and Issue: unknown, P. 275 - 304

Published: April 24, 2025

This chapter explores the ethical, legal, and societal risks of Artificial Intelligence (AI) and the Metaverse in scientific research and publishing. While AI aids data analysis and peer review, it risks perpetuating biases that could distort findings and compromise integrity. The Metaverse, as a new digital space for academic engagement, introduces challenges like privacy and intellectual property concerns and new opportunities for fraud. Furthermore, algorithmic publishing may amplify visibility disparities, creating a digital divide. To address these issues, this chapter advocates robust governance, ethical guidelines, and collaborative frameworks to ensure fairness, integrity, and trust in the evolving research landscape. It is imperative to understand how dangerous these technologies can be, beyond their abilities, applications, and services, lest the human race play with a self-destruction trigger beyond its horizons.

Language: English

Citations

0

Exploring the Dark Side: A Systematic Review of Generative AI’s Role in Network Attacks and Breaches DOI

Hamza Kurtović,

Esma Šabanović,

Ali Abd Almisreb

et al.

Lecture notes in networks and systems, Journal Year: 2025, Volume and Issue: unknown, P. 27 - 51

Published: Jan. 1, 2025

Language: English

Citations

0

ChatGPT's Impact on Ethical Hacking and Cybersecurity DOI

Rahaf Adam Alnuaimi,

Moatsum Alawida, Manal Al-Rawashdeh

et al.

Advances in computational intelligence and robotics book series, Journal Year: 2025, Volume and Issue: unknown, P. 573 - 608

Published: May 1, 2025

As cyber threats grow, leveraging tools like ChatGPT offers a strategic advantage in ethical hacking and cybersecurity. This chapter examines ChatGPT's potential for enhancing ethical hacking skills through scenario-based learning, topic exploration, and critical evaluation of its responses. A questionnaire with ten questions on tools, techniques, certifications, and hacker psychology was answered by 20 experts. While feedback on operating systems and cybersecurity principles was positive, some experts questioned the practicality of moderate recommendations. Statistical analysis showed a Cronbach's alpha of 0.878, indicating high internal consistency and an overall positive response. The chapter underscores ChatGPT's value for tracking trends and methodologies while acknowledging its practical limitations.
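
The reported Cronbach's alpha of 0.878 measures the internal consistency of the ten-item questionnaire. A minimal sketch of how such a coefficient is computed, on simulated Likert responses rather than the chapter's survey data, could look like this:

```python
# Hedged sketch: Cronbach's alpha for a hypothetical 10-item questionnaire
# answered by 20 respondents; the simulated data are illustrative only.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = questionnaire items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(42)
base = rng.integers(2, 5, size=(20, 1))                        # shared tendency per expert
responses = np.clip(base + rng.integers(-1, 2, size=(20, 10)), 1, 5)  # 1-5 Likert scale
print(f"Cronbach's alpha: {cronbach_alpha(responses):.3f}")
```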

Language: English

Citations

0

Building Human-Centric Defenses Against Generative AI Threats in Developing Countries DOI
Faisal Aburub,

Saad Alateef

Advances in computational intelligence and robotics book series, Journal Year: 2025, Volume and Issue: unknown, P. 445 - 464

Published: May 1, 2025

Developing countries face heightened cybersecurity risks due to the accelerating adoption of generative AI tools. These technologies have created new vulnerabilities and widened existing protection gaps, especially in resource-constrained regions. This chapter investigates how the rapid expansion of generative AI intensifies threats in emerging economies by examining malicious applications, unauthorized data generation, and social engineering exploits. It aims to uncover approaches that bolster defenses through human-centric methods and responsible governance mechanisms. Analysis of empirical studies indicates that generative AI already affects many sectors, leaving individuals and institutions exposed to novel attack vectors. Findings suggest that targeted policy actions and capacity-building measures can mitigate these risks. The chapter concludes by underscoring human oversight and ethical deployment as essential countermeasures. The work contributes strategic guidance for safeguarding digitally evolving societies.

Language: English

Citations

0