Electronics, Journal Year: 2024, Volume and Issue: 13(23), P. 4835 - 4835
Published: Dec. 7, 2024
Despite the remarkable progress of large language models (LLMs) in understanding and generating unstructured text, their application to structured data domains and their multi-role capabilities remain underexplored. In particular, utilizing LLMs to perform complex reasoning tasks on knowledge graphs (KGs) is still an emerging area with limited research. To address this gap, we propose KG-EGV, a versatile framework that leverages LLMs for KG-based tasks. KG-EGV consists of four core steps: sentence segmentation, graph retrieval, EGV, and backward updating, which together segment sentences, retrieve relevant KG components, and derive logical conclusions. EGV, a novel integrated framework for LLM inference, enables comprehensive reasoning beyond retrieval by synthesizing diverse evidence, which is often unattainable via retrieval alone due to noise or hallucinations. The framework incorporates six key stages: generation expansion, expansion evaluation, document re-ranking, re-ranking evaluation, answer generation, and verification. Within the framework, LLMs take on various roles, such as generator, re-ranker, evaluator, and verifier, collaboratively enhancing answer precision and coherence. By combining the strengths of retrieval-based and generation-based evidence, KG-EGV achieves greater flexibility and accuracy in evidence gathering and answer formulation. Extensive experiments on widely used benchmarks, including FactKG, MetaQA, NQ, WebQ, and TriviaQA, demonstrate that KG-EGV achieves state-of-the-art performance in answer accuracy and evidence quality, showcasing its potential to advance KG-based QA research and applications.
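The six EGV stages above can be sketched as a simple pipeline. This is a minimal illustrative sketch, not the authors' implementation: the `llm` stub, the length-based evaluation score, and all function names are assumptions standing in for real model calls and scoring.

```python
# Hypothetical sketch of the six-stage EGV pipeline; the llm() stub and
# the toy scoring function are placeholders, not the paper's actual code.

def llm(role: str, prompt: str) -> str:
    # Stand-in for querying an LLM in a given role (generator, verifier, ...).
    return f"[{role}] {prompt}"

def egv_pipeline(question: str, retrieved_docs: list[str]) -> dict:
    # Stages 1-2: generation expansion and expansion evaluation.
    expansions = [llm("generator", f"expand {question} with {d}")
                  for d in retrieved_docs]
    scored = [(e, len(e)) for e in expansions]  # toy evaluation score
    # Stages 3-4: document re-ranking and re-ranking evaluation.
    reranked = [e for e, _ in sorted(scored, key=lambda p: -p[1])]
    # Stage 5: answer generation from the top-ranked evidence.
    answer = llm("generator", f"answer {question} using {reranked[0]}")
    # Stage 6: verification of the generated answer.
    verified = llm("verifier", f"check {answer}")
    return {"answer": answer, "verified": verified, "evidence": reranked}
```

In the full framework each stage would be driven by a separately prompted LLM role rather than the toy heuristics used here.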
Language: English