This study proposes a hybrid method using two types of large language models (LLMs) for prompt response to defect complaints by rapidly exploring potential causes and repair methods via machine reading comprehension (MRC) tasks. Although numerous past maintenance records and guidelines offer valuable insights into newly reported defects, manually reviewing all of this data is impractical due to the significant time and effort required.
MRC is a natural language processing (NLP) task that trains models to read extensive texts and answer questions. While recent state-of-the-art (SOTA) LLMs, as they are, exhibit high performance on general questions, they falter in specialized domains and require fine-tuning. However, generating question-answer (QA) datasets for fine-tuning is time-consuming, taking over 200 days with crowdsourcing.
Furthermore, many companies restrict LLM usage in daily tasks due to data leakage risks. To mitigate these challenges, this study introduces an approach wherein Bidirectional Encoder Representations from Transformers (BERT) is fine-tuned on QA datasets automatically generated by a Generative Pre-trained Transformer (GPT) from publicly available construction guidelines.
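The QA-generation step described above can be sketched as follows. The chunk size, prompt wording, and helper names (`chunk_text`, `qa_generation_prompt`) are illustrative assumptions; the abstract does not specify the paper's actual prompt design.

```python
# Sketch: turning guideline passages into prompts that ask a generative LLM
# for extractive QA pairs (answers must be literal spans, so the pairs can
# later fine-tune an extractive reader such as BERT).

def chunk_text(text: str, max_words: int = 120) -> list[str]:
    """Split a guideline document into word-bounded passages."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def qa_generation_prompt(passage: str, n_pairs: int = 3) -> str:
    """Build a prompt requesting QA pairs whose answers are exact substrings."""
    return (
        f"Read the following construction-guideline passage and write "
        f"{n_pairs} question-answer pairs. Each answer must be an exact "
        f"substring of the passage.\n\nPassage:\n{passage}\n\nQA pairs:"
    )

# Toy guideline text (10 words, repeated) standing in for a real document.
guideline = "Cracks in concrete walls are often caused by drying shrinkage. " * 60
prompts = [qa_generation_prompt(p) for p in chunk_text(guideline)]
print(len(prompts), "prompts prepared for the generative model")
```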
The GPT-applied part of the proposed method generated 2,548 QA pairs in seven and a half hours, significantly reducing dataset generation time. For MRC, the fine-tuned BERT achieved a competitive highest F1 score of 88.0%, outperforming the Korean benchmark's 68.5%. This study contributes to reducing the cost and resources of constructing domain-specific QA datasets and to performing efficient defect-complaint response within a data-secure environment.
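For context on the F1 figures above: in SQuAD-style MRC evaluation, F1 is a token-overlap score between the predicted and gold answer spans. A minimal sketch, assuming that standard definition (the abstract does not state which F1 variant the paper uses):

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """SQuAD-style token-level F1 between a predicted and a gold answer span."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Multiset intersection counts shared tokens, respecting repetitions.
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# 3 of 4 tokens match in both directions, so precision = recall = F1 = 0.75.
print(token_f1("repair the sealant joint", "replace the sealant joint"))  # 0.75
```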
ACM Transactions on Intelligent Systems and Technology, 2024, Vol. 15(4), pp. 1-24. Published: April 17, 2024.
The task of machine reading comprehension (MRC) is to enable a model to read and understand a piece of text and then answer the corresponding question correctly. This requires the model not only to be able to perform semantic understanding but also to possess logical reasoning capabilities. Just like human reading, it involves thinking about the text from two interacting perspectives: semantics and logic.
However, previous methods either consider only a single structure or cannot simultaneously balance semantic understanding and logical reasoning; a single form of representation cannot fully capture the meaning of the text. Additionally, the issue of sparsity in graph composition presents a significant challenge for models that rely on graph-based reasoning.
To this end, a cross-graph knowledge propagation network (CGKPN) with adaptive node connection is presented to address the above issues. The model first performs self-view node embedding on the constructed graphs to update node representations within the graphs. Specifically, a relevance matrix between nodes is introduced to adaptively adjust node connections in response to the challenge posed by the sparse graph.
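One plausible reading of this adaptive connection step is a similarity-based densification of a sparse adjacency matrix: compute a relevance matrix from node embeddings and add edges wherever relevance exceeds a threshold. The embeddings, threshold, and function names below are illustrative assumptions, not CGKPN's actual parameterization.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def adaptive_adjacency(embeddings, sparse_adj, threshold=0.9):
    """Relevance matrix R[i][j] = cos(h_i, h_j); keep the original edges and
    add new ones where relevance exceeds the threshold (sparse-graph fix)."""
    n = len(embeddings)
    relevance = [[cosine(embeddings[i], embeddings[j]) for j in range(n)]
                 for i in range(n)]
    return [[1 if sparse_adj[i][j] or (i != j and relevance[i][j] > threshold)
             else 0
             for j in range(n)] for i in range(n)]

# Three nodes; nodes 0 and 2 are disconnected in the sparse graph but
# have highly similar embeddings, so an edge is added between them.
emb = [[1.0, 0.1], [0.0, 1.0], [0.9, 0.12]]
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(adaptive_adjacency(emb, adj))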
Subsequently, CGKPN conducts cross-graph knowledge propagation on the nodes that are identical in both graphs, effectively resolving conflicts arising from the different views and enabling the model to better integrate semantic and logical relationships through efficient interaction.
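The propagation over nodes shared by the two views can be pictured as exchanging and fusing each shared node's two embeddings. The sigmoid-gated mixing below is an illustrative stand-in for CGKPN's actual mechanism, which this abstract does not detail.

```python
import math

def fuse_shared_nodes(sem_emb: dict, log_emb: dict) -> dict:
    """For each node present in both graphs, mix its semantic-view and
    logical-view embeddings with an agreement-derived gate: strongly
    agreeing views average evenly, conflicting views lean toward the
    semantic view less."""
    fused = {}
    for node in sem_emb.keys() & log_emb.keys():
        s, l = sem_emb[node], log_emb[node]
        agreement = sum(a * b for a, b in zip(s, l))
        gate = 1 / (1 + math.exp(-agreement))  # sigmoid of view agreement
        fused[node] = [gate * a + (1 - gate) * b for a, b in zip(s, l)]
    return fused

# Only "n1" exists in both views, so only it is propagated across graphs.
semantic = {"n1": [1.0, 0.0], "n2": [0.5, 0.5]}
logical = {"n1": [0.8, 0.2], "n3": [0.0, 1.0]}
print(fuse_shared_nodes(semantic, logical))
```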
Experiments on the two MRC datasets ReClor and LogiQA indicate the superior performance of our proposed model compared with other existing baselines.