Published: Oct. 28, 2024
Language: English
Computation, Year: 2025, Number: 13(2), Pages: 30-30
Published: Jan. 29, 2025
The escalating complexity of cyber threats, coupled with the rapid evolution of digital landscapes, poses significant challenges to traditional cybersecurity mechanisms. This review explores the transformative role of LLMs in addressing critical challenges in cybersecurity. With evolving digital landscapes and the increasing sophistication of cyber threats, traditional security mechanisms often fall short in detecting, mitigating, and responding to complex risks. LLMs, such as GPT, BERT, and PaLM, demonstrate unparalleled capabilities in natural language processing, enabling them to parse vast datasets, identify vulnerabilities, and automate threat detection. Their applications extend to phishing detection, malware analysis, drafting security policies, and even incident response. By leveraging advanced features like context awareness and real-time adaptability, LLMs enhance organizational resilience against cyberattacks while also facilitating more informed decision-making. However, deploying LLMs is not without challenges, including issues of interpretability, scalability, ethical concerns, and susceptibility to adversarial attacks. This review critically examines their foundational elements, real-world applications, and limitations, highlighting key advancements in their integration into security frameworks. Through a detailed analysis of case studies, this paper identifies emerging trends and proposes future research directions, including improving robustness, enhancing privacy, and automating security management. The study concludes by emphasizing the potential of LLMs to redefine cybersecurity, driving innovation and enhancing digital security ecosystems.
Language: English
Cited: 9
Cybersecurity, Year: 2025, Number: 8(1)
Published: Feb. 5, 2025
Abstract The rapid development of large language models (LLMs) has opened new avenues across various fields, including cybersecurity, which faces an evolving threat landscape and a demand for innovative technologies. Despite initial explorations into the application of LLMs in cybersecurity, there is a lack of a comprehensive overview of this research area. This paper addresses the gap by providing a systematic literature review, covering an analysis of over 300 works, encompassing 25 LLMs and more than 10 downstream scenarios. Our review addresses three key questions: the construction of cybersecurity-oriented LLMs, the application of LLMs to cybersecurity tasks, and the challenges open for further study. It aims to shed light on the extensive potential of LLMs for enhancing cybersecurity practices and to serve as a valuable resource for applying LLMs in this field. We also maintain and regularly update a list of practical guides at https://github.com/tmylla/Awesome-LLM4Cybersecurity .
Language: English
Cited: 5
Frontiers of Computer Science, Year: 2025, Number: 19(10)
Published: Jan. 28, 2025
Language: English
Cited: 1
Published: Jan. 1, 2025
Language: English
Cited: 0
Sensors, Year: 2025, Number: 25(5), Pages: 1318-1318
Published: Feb. 21, 2025
Large Language Models (LLMs), like GPT and BERT, have significantly advanced Natural Language Processing (NLP), enabling high performance on complex tasks. However, their size and computational needs make LLMs unsuitable for deployment on resource-constrained devices, where efficiency, speed, and low power consumption are critical. Tiny Language Models (TLMs), also known as BabyLMs, offer compact alternatives by using compression and optimization techniques to function effectively on devices such as smartphones, Internet of Things (IoT) systems, and embedded platforms. This paper provides a comprehensive survey of TLM architectures and methodologies, including key techniques such as knowledge distillation, quantization, and pruning. Additionally, it explores potential emerging applications of TLMs in automation and control, covering areas such as edge computing, IoT, industrial automation, and healthcare. The paper discusses challenges unique to TLMs, including trade-offs between model size and accuracy, limited generalization, and ethical considerations in deployment. Future research directions are proposed, focusing on hybrid compression techniques, application-specific adaptations, and context-aware models optimized for hardware-specific constraints. The survey aims to serve as a foundational resource for advancing TLM capabilities across diverse real-world applications.
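Of the compression techniques this survey names, quantization is the simplest to illustrate. Below is a minimal sketch of symmetric int8 post-training quantization of a weight vector; the function names and the per-tensor (rather than per-channel) scaling are illustrative assumptions, not the survey's method.

```python
def quantize_int8(weights):
    """Symmetric post-training quantization of a float weight vector
    to int8. Returns (q, scale) with each q[i] in [-127, 127] and
    weights[i] ~= q[i] * scale. Real TLM toolchains typically
    quantize per-channel and calibrate activations as well."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0           # one shared scale per tensor
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.0]
q, scale = quantize_int8(weights)     # 4 bytes per weight -> 1 byte
approx = dequantize(q, scale)
```

Storing 8-bit codes plus one scale cuts weight memory roughly 4x versus float32, which is the kind of saving that makes on-device TLM deployment feasible.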
Language: English
Cited: 0
Materials Genome Engineering Advances, Year: 2025, Number: unknown
Published: March 12, 2025
Abstract Large language models (LLMs) excel at extracting information from the literature. However, deploying LLMs necessitates substantial computational resources, and security concerns with online LLMs pose a challenge to their wider application. Herein, we introduce a method for extracting scientific data from unstructured texts using a local LLM, exemplifying its application with literature on the topic of on-surface reactions. By combining prompt engineering with multi-step text preprocessing, we show that a local LLM can effectively extract information, achieving a recall rate of 91% and a precision of 70%. Moreover, despite significant differences in model parameter size, its performance is comparable to GPT-3.5 turbo (81% recall, 84% precision) and GPT-4o (85% recall, 87% precision). The simplicity, versatility, reduced computational requirements, and enhanced privacy of the method make it highly promising for literature mining, with the potential to accelerate application development across various fields.
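The pipeline described above combines multi-step text preprocessing with prompt engineering before the local LLM ever sees the text. A hypothetical sketch of those two stages follows; the paragraph filter, keyword list, and prompt schema are all assumptions for illustration, not the paper's actual implementation.

```python
import re

def preprocess(text, keywords=("reaction", "surface", "precursor")):
    """Step 1 (assumed): split raw paper text into paragraphs and keep
    only those likely to mention reaction data, shrinking the input the
    local LLM must process."""
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
    return [p for p in paragraphs if any(k in p.lower() for k in keywords)]

def build_prompt(paragraph):
    """Step 2 (assumed): wrap each surviving paragraph in a structured
    extraction prompt; the JSON schema here is illustrative."""
    return (
        "Extract every on-surface reaction as JSON with keys "
        "'precursor', 'substrate', 'temperature'.\n\nText:\n" + paragraph
    )

doc = "Historical introduction.\n\nThe precursor DBBA on Au(111) reacts at 470 K.\n\nAcknowledgements."
prompts = [build_prompt(p) for p in preprocess(doc)]
```

Filtering before prompting is what keeps the approach cheap enough to run locally: only paragraphs that can plausibly contain data are sent to the model.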
Language: English
Cited: 0
Journal of Industrial Information Integration, Year: 2025, Number: unknown, Pages: 100834-100834
Published: March 1, 2025
Language: English
Cited: 0
Methods, Year: 2025, Number: unknown
Published: April 1, 2025
Language: English
Cited: 0
Applied Sciences, Year: 2025, Number: 15(8), Pages: 4177-4177
Published: April 10, 2025
In-context learning (ICL) enables large language models (LLMs) to adapt to new tasks using only a few examples, without requiring fine-tuning. However, the privacy and security risks brought about by this increasing capability have not received enough attention, and there is a lack of research on the issue. In this work, we propose a novel membership inference attack (MIA) method, termed the Neighborhood Deviation Attack, specifically designed to evaluate LLMs in ICL. Unlike traditional MIA methods, our approach does not require access to model parameters and instead relies solely on analyzing the model's output behavior. We first generate neighborhood prefixes for target samples and use the LLM, conditioned on the ICL examples, to complete the text. We then compute the deviation between the original and completed texts and infer membership based on these deviations. We conduct experiments on three datasets and further explore the influence of key hyperparameters on the method's performance and their underlying reasons. Experimental results show that our method is significantly better than comparative methods in terms of stability and achieves higher accuracy in most cases. Furthermore, we discuss four potential defense strategies, including increasing the diversity of ICL examples and introducing controlled randomness into the inference process to reduce the risk of membership leakage.
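The attack's core step, comparing a sample's true continuation against what the model completes from neighborhood prefixes, can be sketched as follows. The similarity metric (difflib ratio) and the decision threshold are stand-in assumptions; the abstract does not specify the paper's exact deviation measure.

```python
from difflib import SequenceMatcher

def deviation(original, completed):
    """Deviation between the original suffix and a model completion,
    here 1 - string similarity (an assumed stand-in metric)."""
    return 1.0 - SequenceMatcher(None, original, completed).ratio()

def infer_membership(original_suffix, completions, threshold=0.5):
    """Neighborhood Deviation Attack, sketched: if the model reproduces
    a sample's continuation closely (low mean deviation) across several
    neighborhood prefixes, guess the sample was in the ICL prompt.
    `threshold` is a hypothetical calibration value."""
    mean_dev = sum(deviation(original_suffix, c) for c in completions) / len(completions)
    return mean_dev < threshold, mean_dev

# A member's suffix tends to be completed almost verbatim; a non-member's diverges.
is_member, _ = infer_membership(
    "the quick brown fox",
    ["the quick brown fox", "the quick brown dog"],
)
```

Note that the attack only observes generated text, which is why, as the abstract states, no access to model parameters is required.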
Language: English
Cited: 0
Intelligent Systems with Applications, Year: 2025, Number: unknown, Pages: 200515-200515
Published: April 1, 2025
Language: English
Cited: 0