Generating Phishing Attacks and Novel Detection Algorithms in the Era of Large Language Models DOI

Jeffrey Fairbanks,

Edoardo Serra

2021 IEEE International Conference on Big Data (Big Data), Journal year: 2024, Issue: unknown, pp. 2314 - 2319

Published: Dec. 15, 2024

Language: English

From Vulnerability to Defense: The Role of Large Language Models in Enhancing Cybersecurity DOI Creative Commons

Wafaa Kasri,

Yassine Himeur, Hamzah Ali Alkhazaleh

et al.

Computation, Journal year: 2025, Issue: 13(2), pp. 30 - 30

Published: Jan. 29, 2025

The escalating complexity of cyber threats, coupled with the rapid evolution of digital landscapes, poses significant challenges to traditional cybersecurity mechanisms. This review explores the transformative role of LLMs in addressing critical challenges in cybersecurity. With evolving digital landscapes and the increasing sophistication of attacks, traditional security mechanisms often fall short in detecting, mitigating, and responding to complex risks. LLMs, such as GPT, BERT, and PaLM, demonstrate unparalleled capabilities in natural language processing, enabling them to parse vast datasets, identify vulnerabilities, and automate threat detection. Their applications extend to phishing detection, malware analysis, the drafting of security policies, and even incident response. By leveraging advanced features like context awareness and real-time adaptability, LLMs enhance organizational resilience against cyberattacks while also facilitating more informed decision-making. However, deploying LLMs is not without challenges, including issues of interpretability, scalability, ethical concerns, and susceptibility to adversarial attacks. The review critically examines foundational elements, real-world applications, and limitations, highlighting key advancements in the integration of LLMs into cybersecurity frameworks. Through detailed analysis and case studies, the paper identifies emerging trends and proposes future research directions, including improving robustness, strengthening privacy, and automating security management. The study concludes by emphasizing the potential of LLMs to redefine cybersecurity, driving innovation and enhancing digital ecosystems.
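
A minimal sketch of one application area the review surveys: LLM-assisted phishing triage. The query_llm function is a hypothetical, mocked placeholder for a real chat-completion API, and its keyword check merely stands in for an actual model call; nothing here is taken from the review itself.

PROMPT_TEMPLATE = (
    "You are a security analyst. Classify the email below as PHISHING or BENIGN "
    "and list the indicators supporting your verdict.\n\nEmail:\n{email}"
)

def query_llm(prompt: str) -> str:
    # Mocked placeholder: a real deployment would call an LLM provider's API here.
    cues = ("verify your account", "urgent", "password")
    hit = any(c in prompt.lower() for c in cues)
    return "PHISHING: urgency cues and credential request" if hit else "BENIGN"

def triage_email(email_text: str) -> str:
    """Build the analyst prompt for one email and return the model's verdict."""
    return query_llm(PROMPT_TEMPLATE.format(email=email_text))

print(triage_email("URGENT: please verify your account password within 24 hours."))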

Language: English

Cited by

9

Malware Reverse Engineering with Large Language Model for Superior Code Comprehensibility and IoC Recommendations DOI Creative Commons
Ashley Q. Williamson, Michael Beauparlant

Research Square (Research Square), Journal year: 2024, Issue: unknown

Published: May 27, 2024

Abstract Malware reverse engineering, the process of dissecting malicious software to understand its functionality and behavior, faces significant challenges due to the complexity and obfuscation techniques employed by modern malware. The application of Gemini Pro to interpreting reverse-engineered malware code introduces a novel approach to enhancing the understanding of complex malware behaviors. By leveraging advanced natural language processing capabilities, the model provides detailed and accurate explanations of malware's functional components, offering substantial improvements over traditional analysis methods. The study demonstrates the model's proficiency in identifying key operational mechanisms and recommending relevant indicators of compromise, which are crucial for effective threat detection and mitigation. A comprehensive comparative analysis reveals that the model outperforms conventional static and dynamic analysis tools in terms of clarity, coherence, and time efficiency. Detailed case studies of various samples, including Ramnit, Kelihos, and Lollipop, illustrate the model's ability to generate clear and actionable insights, thereby facilitating better decision-making in cybersecurity contexts. The findings underscore the potential of integrating such models into reverse engineering workflows to significantly enhance the efficiency and effectiveness of threat mitigation efforts.
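
The abstract does not reproduce the prompts or model outputs, so the sketch below only illustrates the post-processing step implied by the IoC recommendations: pulling candidate indicators (IP addresses, domains, SHA-256 hashes) out of a model-generated explanation with regular expressions. The explanation string is invented for illustration and is not taken from the paper.

import re

# Sample explanation text, invented for illustration; in practice this would be
# the LLM's explanation of the reverse-engineered code.
explanation = (
    "The unpacked stub resolves its C2 server at 203.0.113.7, falls back to "
    "update.example-cdn.net, and drops a payload with SHA-256 "
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08."
)

# Regexes for a few common IoC types; matches are candidates, not verdicts.
IOC_PATTERNS = {
    "ipv4":   r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "domain": r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)*\.[a-z]{2,}\b",
    "sha256": r"\b[a-f0-9]{64}\b",
}

def extract_iocs(text: str) -> dict:
    """Return candidate IoCs grouped by type, deduplicated, order preserved."""
    found = {}
    for kind, pattern in IOC_PATTERNS.items():
        hits = re.findall(pattern, text, flags=re.IGNORECASE)
        found[kind] = list(dict.fromkeys(hits))
    return found

print(extract_iocs(explanation))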

Language: English

Cited by

11

Efficient Large Language Model Inference with Vectorized Floating Point Calculations DOI Open Access

Jacob Owens,

Skylar Matthews

Published: June 13, 2024

The development of highly sophisticated language models has revolutionized various natural language processing tasks, demanding efficient inference processes to ensure real-time responsiveness and minimal computational resource usage. Vectorized floating point calculations present a novel and significant approach to enhancing the efficiency of language model inference, leveraging parallel computation capabilities to achieve substantial performance improvements. This article details the implementation of vectorized calculations within GPT-Neo, demonstrating a notable 12% increase in inference speed through comprehensive benchmarks on multiple datasets. The evaluation highlights the optimized model's ability to reduce inference time, improve throughput, and lower memory usage and energy consumption without compromising accuracy. The findings reveal the potential of vectorized operations to enhance the scalability and operational efficiency of advanced language models, paving the way for more responsive and resource-efficient AI applications across diverse deployment scenarios.
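
As a rough illustration of why vectorized floating point calculations help, the sketch below times a pure-Python scalar loop against a NumPy matrix-vector product of the kind that dominates transformer inference. It is a toy benchmark under assumed dimensions, not the paper's GPT-Neo implementation or its 12% result.

import time
import numpy as np

# Toy comparison: element-by-element loop vs. NumPy's vectorized kernel.
d_model, d_ff = 512, 2048
W = np.random.rand(d_ff, d_model).astype(np.float32)
x = np.random.rand(d_model).astype(np.float32)

def matvec_scalar(W, x):
    """Matrix-vector product computed one multiply-add at a time."""
    out = np.zeros(W.shape[0], dtype=np.float32)
    for i in range(W.shape[0]):
        acc = 0.0
        for j in range(W.shape[1]):
            acc += W[i, j] * x[j]
        out[i] = acc
    return out

t0 = time.perf_counter()
y_loop = matvec_scalar(W, x)
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
y_vec = W @ x  # vectorized floating point path
t_vec = time.perf_counter() - t0

assert np.allclose(y_loop, y_vec, rtol=1e-3)
print(f"scalar loop: {t_loop:.4f}s   vectorized: {t_vec:.6f}s")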

Language: English

Cited by

10

Improving Learning Efficiency in Large Language Models through Shortcut Learning DOI Creative Commons

Amane Meibuki,

Renshu Nanao,

Mugen Outa

et al.

Research Square (Research Square), Journal year: 2024, Issue: unknown

Published: June 14, 2024

Abstract Large-scale neural networks have demonstrated remarkable capabilities in natural language processing tasks, yet they often face challenges related to computational efficiency and scalability. The introduction of shortcut learning mechanisms offers a novel and significant advancement by enhancing information flow and reducing computational overhead, thereby improving model performance and training speed. This research explores the integration of shortcut learning into the GPT-Neo architecture, resulting in a model that exhibits faster convergence, higher accuracy, and improved resource management. Through meticulous architectural modifications, such as residual connections, skip layers, and gating mechanisms, the modified model achieved superior performance across various benchmarks, including GLUE, SQuAD, and WMT, demonstrating its proficiency in complex linguistic tasks. The experimental results underscored the model's robustness and generalization capabilities, making it a competitive alternative to existing state-of-the-art models. Comprehensive evaluation metrics, including F1 score and BLEU, were used to validate the effectiveness of the proposed approach, highlighting substantial improvements in accuracy. The study contributes significantly to the field of artificial intelligence by providing a scalable and efficient framework for the design of advanced LLMs, ultimately paving the way for more effective and accessible AI technologies.
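
The paper's exact architectural modifications are not spelled out in the abstract, so the sketch below shows one generic form the listed ingredients can take: a residual shortcut whose contribution is modulated by a learned gate, wrapped around an arbitrary sub-block (here a feed-forward layer). It is a PyTorch illustration of the idea, not the modified GPT-Neo block itself.

import torch
import torch.nn as nn

class GatedSkipBlock(nn.Module):
    """Wraps an arbitrary sub-block with a residual shortcut whose
    contribution is modulated by a learned, input-dependent gate."""

    def __init__(self, d_model: int, inner: nn.Module):
        super().__init__()
        self.inner = inner                       # e.g. an attention or MLP block
        self.norm = nn.LayerNorm(d_model)
        self.gate = nn.Linear(d_model, d_model)  # per-feature gate values

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.inner(self.norm(x))             # transformed path
        g = torch.sigmoid(self.gate(x))          # gate in (0, 1)
        return x + g * h                         # shortcut carries x through unchanged

# Usage: wrap a feed-forward layer of width 256.
ff = nn.Sequential(nn.Linear(256, 1024), nn.GELU(), nn.Linear(1024, 256))
block = GatedSkipBlock(256, ff)
out = block(torch.randn(8, 16, 256))             # (batch, seq, d_model)
print(out.shape)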

Language: English

Cited by

3

Automated Comparative Analysis of Visual and Textual Representations of Logographic Writing Systems in Large Language Models DOI Creative Commons

Peng Shao,

Ruichen Li,

Kai Qian

et al.

Research Square (Research Square), Journal year: 2024, Issue: unknown

Published: Aug. 16, 2024

Abstract The complex nature of logographic writing systems, characterized by their visually intricate characters and context-dependent meanings, presents unique challenges for computational models designed primarily for alphabetic scripts. Understanding the ability of LLMs to process logographic scripts across visual and textual input modalities is essential for advancing their application in multilingual contexts. The novel approach presented in this study systematically compares model performance when interpreting logographic characters as both visual and textual data, offering new insights into the semantic consistency and accuracy of model outputs across these modalities. The findings reveal critical disparities in performance, particularly highlighting the models' tendency to favor one input modality over the other, which suggests a need for further refinement of multimodal processing capabilities. Through detailed analysis of error patterns, semantic similarity, and character complexity, the research demonstrates the importance of developing more robust and versatile LLM architectures capable of effectively managing the inherent complexities of logographic writing systems. The conclusions drawn from this study not only provide a deeper understanding of the limitations of current models but also set the stage for future innovations in the field, aiming to enhance the ability of models to generalize across diverse linguistic structures and input types.
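
The cross-modal comparison ultimately reduces to scoring how consistent two descriptions of the same character are. The paper's similarity metric is not given in the abstract, so the sketch below uses a simple bag-of-words cosine as a stand-in, and both description strings are invented examples rather than real model outputs.

import math
import re
from collections import Counter

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    tokens_a = Counter(re.findall(r"[a-z]+", a.lower()))
    tokens_b = Counter(re.findall(r"[a-z]+", b.lower()))
    dot = sum(tokens_a[t] * tokens_b[t] for t in tokens_a)
    norm_a = math.sqrt(sum(c * c for c in tokens_a.values()))
    norm_b = math.sqrt(sum(c * c for c in tokens_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented example outputs for the same character, one per input modality.
text_modality_output = "The character means 'forest' and depicts three trees grouped together."
image_modality_output = "This glyph shows several tree radicals and denotes a wooded area or forest."

print(f"cross-modal consistency: {bow_cosine(text_modality_output, image_modality_output):.2f}")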

Language: English

Cited by

3

Game-Theoretic Approaches for Step-wise Controllable Text Generation in Large Language Models DOI

Daniel Sefeni,

Michael Johnson,

Joshua Lee

et al.

Authorea (Authorea), Journal year: 2024, Issue: unknown

Published: Sep. 3, 2024

The growing reliance on AI-generated content across various industries necessitates robust methods for controlling the outputs of language models to ensure quality, relevance, and adherence to ethical guidelines. Introducing a novel game-theoretic framework, this research establishes a structured approach to controllable text generation, enabling strategic manipulation of model outputs through adaptive prompt interventions. The study employed the Mistral model, utilizing concepts of Nash equilibrium and feedback loops to dynamically adjust prompting strategies, optimizing the balance between alignment, diversity, and coherence. Experimental results demonstrated that different strategies distinctly influenced the generated text, with direct prompts enhancing relevance and interrogative prompts promoting creative expression. Case studies further illustrated practical applications, showcasing the framework's adaptability across generation tasks. A comparative analysis against traditional control methods highlighted the superiority of the game-theoretic approach in achieving high-quality, controlled outputs. These findings demonstrate the framework's potential to enhance AI-driven content generation, offering significant implications for human-AI collaboration, automated content creation, and the deployment of AI technologies.
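
The abstract names Nash equilibrium and feedback loops but not the concrete update rule, so the sketch below substitutes a standard no-regret scheme (multiplicative weights / Hedge) over two prompt strategies, with a mocked payoff trading off relevance against diversity. It illustrates the feedback-driven selection idea only; the payoff numbers and strategies are assumptions.

import math
import random

random.seed(0)
STRATEGIES = ["direct", "interrogative"]
weights = {s: 1.0 for s in STRATEGIES}
ETA = 0.5  # step size of the multiplicative-weights update

def payoff(strategy: str) -> float:
    """Mocked payoff standing in for scoring the generated text: direct prompts
    are assumed stronger on relevance, interrogative prompts on diversity."""
    relevance = 0.8 if strategy == "direct" else 0.6
    diversity = 0.5 if strategy == "direct" else 0.7
    return 0.6 * relevance + 0.4 * diversity + random.uniform(-0.05, 0.05)

for _ in range(20):
    # Feedback loop: score each prompt strategy on this round's output and
    # reweight it (Hedge / multiplicative weights, a no-regret update).
    for s in STRATEGIES:
        weights[s] *= math.exp(ETA * payoff(s))

total = sum(weights.values())
print({s: round(w / total, 3) for s, w in weights.items()})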

Language: English

Cited by

3

Evaluating the Quality of Large Language Model-Generated Cybersecurity Advice in GRC Settings DOI Creative Commons

Zhiyuan Li,

Xiaoxi Wang,

Qingxiang Zhang

et al.

Research Square (Research Square), Journal year: 2024, Issue: unknown

Published: June 21, 2024

Abstract The growing complexity and frequency of cybersecurity threats require innovative approaches to enhance Governance, Risk, and Compliance (GRC) frameworks. Evaluating the quality of cybersecurity advice generated by ChatGPT and Google Gemini introduces a novel method to harness artificial intelligence for more effective threat mitigation and regulatory compliance. The study reveals that ChatGPT generally outperforms Google Gemini across metrics such as relevance, accuracy, completeness, and contextual appropriateness. Detailed comparative analysis, statistical evaluation, and case studies demonstrate the superior performance of ChatGPT, while also highlighting areas for improvement in both models. The findings emphasize the potential benefits of integrating LLMs into GRC frameworks, provided their use is complemented with human expertise to address nuanced challenges. This research offers valuable insights into the practical application of AI in cybersecurity, suggesting strategic directions for future advancements.
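
A sketch of the aggregation step behind such a comparison: average rater scores per model over the four quality dimensions named in the abstract and report an overall mean. Every number below is an invented placeholder, not data from the study.

from statistics import mean

# Invented placeholder ratings (1-5) from three reviewers per dimension.
ratings = {
    "ChatGPT": {
        "relevance": [5, 4, 5], "accuracy": [4, 4, 5],
        "completeness": [4, 5, 4], "contextual_appropriateness": [5, 4, 4],
    },
    "Google Gemini": {
        "relevance": [4, 4, 4], "accuracy": [3, 4, 4],
        "completeness": [4, 3, 4], "contextual_appropriateness": [4, 4, 3],
    },
}

for model, metrics in ratings.items():
    per_metric = {m: round(mean(scores), 2) for m, scores in metrics.items()}
    overall = round(mean(per_metric.values()), 2)
    print(model, per_metric, f"overall={overall}")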

Language: English

Cited by

2

Explainability of Large Language Models (LLMs) in Providing Cybersecurity Advice DOI Open Access

Keisuke Okutu,

Hakura Yumetoshi

Published: June 3, 2024

Artificial intelligence has transformed various domains, including cybersecurity, by introducing models capable of understanding and generating human language. The novel approach of leveraging these models to provide cybersecurity advice offers significant potential yet raises concerns about their explainability and reliability. This research systematically investigates the ability of advanced language models to distinguish between defensive and offensive advice, examines the impact of excessive caution and political correctness on the quality of recommendations, and provides a comprehensive framework for evaluating their performance. The findings highlight the strengths and limitations of current models, emphasizing the need for improved interpretability and practical utility in AI-driven cybersecurity solutions. By proposing specific recommendations for enhancements, the study aims to advance the development of more transparent, reliable, and effective tools.

Language: English

Cited by

0

Elevating the Inference Performance of LLMs with Reverse Inference Federation DOI Creative Commons

Qinian Li,

Yuetian Gu

Research Square (Research Square), Journal year: 2024, Issue: unknown

Published: June 12, 2024

Abstract Natural language processing has seen impressive progress, driven by increasingly sophisticated models capable of performing complex linguistic tasks. The introduction of reverse inference federation represents a novel and significant advancement in optimizing the performance of these models, offering a scalable solution that distributes computational workloads across multiple nodes. Detailed modifications to the GPT-Neo architecture, coupled with innovative task allocation and synchronization algorithms, have led to substantial improvements in speed, accuracy, and resource utilization. Extensive experimentation and rigorous statistical analysis validated the effectiveness of this approach, demonstrating its potential to enhance the efficiency and scalability of large language models. By leveraging distributed computing techniques, the approach addresses challenges associated with real-time inference, providing a robust framework that ensures optimal resource utilization and reduced latency. The findings highlight the transformative impact of distributing inference tasks, setting a new benchmark for the optimization of natural language processing applications.
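
The federation protocol itself is not described in the abstract, so the sketch below shows only the generic shape of the idea: fan prompts out to several worker nodes and gather completions in input order. The run_on_node function is a hypothetical stand-in for a remote call to a model replica.

from concurrent.futures import ThreadPoolExecutor

NODES = ["node-0", "node-1", "node-2"]

def run_on_node(node: str, prompt: str) -> str:
    # Placeholder for a remote call to a model replica hosted on `node`.
    return f"[{node}] completion for: {prompt}"

def federated_generate(prompts):
    """Round-robin prompts across nodes and collect outputs in input order."""
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        futures = [
            pool.submit(run_on_node, NODES[i % len(NODES)], p)
            for i, p in enumerate(prompts)
        ]
        return [f.result() for f in futures]

print(federated_generate(["Summarize the report.", "Translate to French.", "List key risks."]))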

Language: English

Cited by

0

Optimizing Large Language Model Scaling with Micro Batch Pipeline and Inference Parallelism DOI Creative Commons

Doudou Quan,

R. Wang,

Zhu Lian

et al.

Research Square (Research Square), Journal year: 2024, Issue: unknown

Published: June 14, 2024

Abstract Natural language processing has seen transformative progress with the development of sophisticated models capable of generating and understanding human language with high accuracy. The novel concept of integrating a micro batch pipeline and inference parallelism represents a significant leap in optimizing the scalability and efficiency of these models. Through comprehensive experimentation with a modified GPT-Neo, substantial improvements were achieved in throughput, latency, perplexity, and BLEU scores, highlighting the effectiveness of the proposed methodologies. The enhanced model demonstrated superior performance on large datasets, maintaining accuracy and quality of outputs, thereby addressing critical bottlenecks in computational load and resource constraints. The study demonstrates the potential of advanced parallelism techniques to revolutionize model training and deployment, contributing valuable insights into the future of natural language processing and artificial intelligence.
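
A schematic of a GPipe-style micro-batch schedule, printed as a timeline of which pipeline stage handles which micro-batch at each tick. It only illustrates why splitting a batch keeps all stages busy during fill and drain; the paper's actual scheduling and inference-parallelism details are not reproduced.

NUM_STAGES = 3        # model split across 3 pipeline stages
NUM_MICROBATCHES = 5  # one input batch split into 5 micro-batches

# Forward-pass timeline: stage s works on micro-batch (tick - s) when it exists,
# so stages fill up, run concurrently on different micro-batches, then drain.
for tick in range(NUM_STAGES + NUM_MICROBATCHES - 1):
    row = []
    for stage in range(NUM_STAGES):
        mb = tick - stage
        row.append(f"mb{mb}" if 0 <= mb < NUM_MICROBATCHES else "idle")
    print(f"t={tick}: " + " | ".join(f"stage{s}:{w}" for s, w in enumerate(row)))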

Language: English

Cited by

0