A comprehensive survey on integrating large language models with knowledge-based methods
Wenli Yang, Lilian Some, Michael Bain et al.
Knowledge-Based Systems, Journal Year: 2025, Volume and Issue: unknown, P. 113503 - 113503
Published: April 1, 2025
Language: English
Large language models for software vulnerability detection: a guide for researchers on models, methods, techniques, datasets, and metrics
Seyed Mohammad Taghavi Far, Farid Feyzi
International Journal of Information Security, Journal Year: 2025, Volume and Issue: 24(2)
Published: Feb. 14, 2025
Language: English
RSCID: requirements selection considering interactions and dependencies
Mohammad Reza Keyvanpour, Zahra Karimi Zandian, Elham Sodagari et al.
Genetic Programming and Evolvable Machines, Journal Year: 2025, Volume and Issue: 26(1)
Published: March 27, 2025
Language: English
SIFT: enhance the performance of vulnerability detection by incorporating structural knowledge and multi-task learning
Automated Software Engineering, Journal Year: 2025, Volume and Issue: 32(2)
Published: April 11, 2025
Language: English
Do LLMs consider security? an empirical study on responses to programming questions
Amirali Sajadi, Binh Le, Thu Anh Nguyen et al.
Empirical Software Engineering, Journal Year: 2025, Volume and Issue: 30(3)
Published: April 16, 2025
Abstract
The widespread adoption of conversational LLMs for software development has raised new security concerns regarding the safety of LLM-generated content. Our motivational study outlines ChatGPT’s potential in volunteering context-specific information to developers, promoting safe coding practices. Motivated by this finding, we conduct a study to evaluate the degree of security awareness exhibited by three prominent LLMs: Claude 3, GPT-4, and Llama 3. We prompt these LLMs with Stack Overflow questions that contain vulnerable code to evaluate whether they merely provide answers or whether they also warn users about the insecure code, thereby demonstrating security awareness. Further, we assess whether LLM responses provide information about the causes, exploits, and fixes of the vulnerability, to help raise users’ awareness. Our findings show that all three models struggle to accurately detect and warn about vulnerabilities, achieving a detection rate of only 12.6% to 40% across our datasets. We also observe that the LLMs tend to identify certain types of vulnerabilities, such as those related to sensitive information exposure and improper input neutralization, much more frequently than other types, such as those involving external control of file names or paths. Furthermore, when the LLMs do issue security warnings, they often provide more information on the causes, exploits, and fixes of vulnerabilities compared to Stack Overflow responses. Finally, we provide an in-depth discussion of the implications of our findings and demonstrate that a CLI-based prompting tool can be used to produce more secure LLM responses.
Language: English
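The study design in the abstract above can be sketched in miniature: given an LLM's answer to a question containing vulnerable code, decide whether the answer also warns about the insecurity, and aggregate a detection rate over a dataset. This is a hypothetical illustration, not the authors' tooling; the keyword heuristic and `WARNING_MARKERS` list are assumptions made here for demonstration.

```python
# Hypothetical sketch of the evaluation idea: flag LLM responses that warn
# about insecure code, then compute a detection rate across a dataset.
WARNING_MARKERS = ("insecure", "vulnerab", "sql injection", "security risk")

def warns_about_insecurity(answer: str) -> bool:
    """Heuristic: does the response flag the code as unsafe?"""
    text = answer.lower()
    return any(marker in text for marker in WARNING_MARKERS)

def detection_rate(answers: list[str]) -> float:
    """Fraction of responses that include a security warning."""
    if not answers:
        return 0.0
    return sum(warns_about_insecurity(a) for a in answers) / len(answers)

answers = [
    "Use string concatenation to build the query.",                 # no warning
    "This code is vulnerable to SQL injection; use placeholders.",  # warning
]
print(detection_rate(answers))  # 0.5
```

In practice the paper's judgments would require more than keyword matching (e.g., manual labeling or a classifier), but the aggregate metric has this shape.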
Demystifying issues, causes and solutions in LLM open-source projects
Yangxiao Cai, Peng Liang, Yifei Wang et al.
Journal of Systems and Software, Journal Year: 2025, Volume and Issue: unknown, P. 112452 - 112452
Published: April 1, 2025
Language: English
Human-understandable explanation for software vulnerability prediction
Journal of Systems and Software, Journal Year: 2025, Volume and Issue: unknown, P. 112455 - 112455
Published: April 1, 2025
Language: English
Exploring Large Language Models’ Ability to Describe Entity-Relationship Schema-Based Conceptual Data Models
Information, Journal Year: 2025, Volume and Issue: 16(5), P. 368 - 368
Published: April 29, 2025
In the field of databases, Large Language Models (LLMs) have recently been studied for generating SQL queries from textual descriptions, while their use for conceptual or logical data modeling remains less explored. The design of relational databases commonly relies on the entity-relationship (ER) model, where translation rules enable the mapping of an ER schema into corresponding tables with constraints. Our study investigates the capability of LLMs to describe in natural language a database model based on an ER schema. Whether for documentation, onboarding, or communication with non-technical stakeholders, LLMs can significantly improve the process of explaining a schema by providing accurate descriptions of how its components interact and of the information they represent. To guide the LLM through challenging constructs, specific hints are defined to provide enriched descriptions. Different LLMs are explored (ChatGPT 3.5 and 4, Llama2, Gemini, Mistral 7B) and different metrics (F1 score, ROUGE, perplexity) are used to assess the quality of the generated descriptions and to compare the LLMs.
Language: English
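One of the metrics named in the abstract above, ROUGE, can be illustrated with a minimal unigram-recall variant (ROUGE-1 recall) comparing a generated schema description to a reference. This is a hedged re-implementation for illustration only, not the paper's evaluation code, and the example sentences are invented.

```python
# ROUGE-1 recall sketch: overlapping unigram counts divided by the number of
# unigrams in the reference description.
from collections import Counter

def rouge1_recall(reference: str, generated: str) -> float:
    """Unigram recall of the generated text against the reference."""
    ref = Counter(reference.lower().split())
    gen = Counter(generated.lower().split())
    overlap = sum(min(ref[w], gen[w]) for w in ref)
    total = sum(ref.values())
    return overlap / total if total else 0.0

reference = "each employee works in exactly one department"
generated = "every employee belongs to exactly one department"
print(round(rouge1_recall(reference, generated), 2))  # 0.57
```

Published evaluations typically use a full ROUGE package (with stemming and ROUGE-L), but the core recall computation has this form.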
Characterizing Developers’ Behaviors in LLM-Supported Software Development
2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC), Journal Year: 2024, Volume and Issue: unknown, P. 1168 - 1177
Published: July 2, 2024
Language: English
Integrating Artificial Open Generative Artificial Intelligence into Software Supply Chain Security
Published: Oct. 23, 2024
While new technologies emerge, human errors are always looming. The software supply chain is increasingly complex and intertwined, and the security of a service has become paramount to ensuring the integrity of products, safeguarding data privacy, and maintaining operational continuity. In this work, we conducted experiments on applying promising open Large Language Models (LLMs) to two main software security challenges: source code language errors and deprecated code, with a focus on their potential to replace conventional static and dynamic security scanners that rely on predefined rules and patterns. Our findings suggest that while the LLMs present some unexpected results, they also encounter significant limitations, particularly in memory complexity and the management of new and unfamiliar data. Despite these challenges, the proactive application of LLMs, coupled with extensive security databases and continuous updates, holds the potential to fortify Software Supply Chain (SSC) processes against emerging threats.
Language: English
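The "predefined rules and patterns" baseline that the abstract above contrasts LLMs against can be sketched as a rule-based scan for deprecated calls in source code. The pattern table below is a made-up example for illustration, not the scanner or rule set used in the paper.

```python
# Rule-based deprecated-code scan: match each predefined pattern against the
# source text and emit a finding message per pattern that occurs.
import re

DEPRECATED_PATTERNS = {
    r"\bos\.tempnam\(": "os.tempnam is removed; use tempfile.mkstemp",
    r"\bmd5\.new\(": "md5 module is deprecated; use hashlib.md5",
}

def scan_deprecated(source: str) -> list[str]:
    """Return one finding message for each deprecated pattern present."""
    findings = []
    for pattern, message in DEPRECATED_PATTERNS.items():
        if re.search(pattern, source):
            findings.append(message)
    return findings

code = "path = os.tempnam()\n"
print(scan_deprecated(code))  # ['os.tempnam is removed; use tempfile.mkstemp']
```

Such scanners are fast and precise on known patterns but, unlike the LLM-based approach the paper explores, cannot flag deprecated usage they have no rule for.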