FEASIBILITY OF USING LOW-PARAMETER LOCAL LLMS IN ANSWERING QUESTIONS FROM ENTERPRISE KNOWLEDGE BASE
Marcin Badurowicz, Stanisław Skulimowski, Maciej Laskowski et al.

Applied Computer Science, Journal Year: 2024, Volume and Issue: 20(4), P. 175 - 191

Published: Dec. 31, 2024

This paper evaluates the feasibility of deploying locally-run Large Language Models (LLMs) for retrieval-augmented question answering (RAG-QA) over internal knowledge bases in small and medium enterprises (SMEs), with a focus on Polish-language datasets. The study benchmarks eight popular open-source and source-available LLMs, including Google’s Gemma-9B and Speakleash’s Bielik-11B, assessing their performance across closed, open, and detailed question types using metrics for language quality, factual accuracy, response stability, and processing efficiency. The results highlight that desktop-class models, though limited in accuracy (with top scores of 45% and 43% for Gemma and Bielik, respectively), hold promise for early-stage enterprise implementations. Key findings include Bielik’s superior handling of open-ended questions and Gemma’s efficiency and reliability on closed-type queries. Distribution analyses revealed variability in model outputs, with Bielik showing the most stable distributions. The research underscores the potential of offline-capable LLMs as cost-effective tools for secure knowledge management in Polish SMEs.

Language: English
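The abstract above outlines a RAG-QA architecture: retrieve relevant passages from an internal knowledge base, then have a locally-run model answer from that retrieved context. The Python sketch below illustrates the general pattern only and is not the authors' benchmark code; the endpoint URL, model name, and TF-IDF retriever are assumptions made for a self-contained example, with the local server presumed to expose an OpenAI-compatible chat-completions API (as llama.cpp and Ollama can).

# Minimal local RAG-QA sketch (illustrative only; not the paper's pipeline).
# Assumes a local LLM behind an OpenAI-compatible endpoint at BASE_URL.
import requests
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

BASE_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical local server

# Toy "enterprise knowledge base": a few internal policy snippets
# (Polish, matching the study's setting).
documents = [
    "Urlop wypoczynkowy wynosi 26 dni roboczych w roku kalendarzowym.",
    "Zgłoszenia serwisowe należy kierować do działu IT przez system Helpdesk.",
    "Faktury zakupowe akceptuje kierownik działu oraz księgowość.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(question, k=2):
    """Return the k documents most similar to the question (TF-IDF cosine similarity)."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def answer(question):
    """Build a context-grounded prompt and query the local model."""
    context = "\n".join(retrieve(question))
    payload = {
        "model": "local-model",  # placeholder name; depends on what the server hosts
        "temperature": 0.0,      # deterministic output helps when scoring factual accuracy
        "messages": [
            {"role": "system", "content": "Answer only from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    }
    response = requests.post(BASE_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(answer("Ile dni urlopu przysługuje pracownikowi?"))

Running the sketch would print an answer grounded in the leave-policy snippet, mirroring the closed-question setting the paper evaluates; any production system would swap in a proper document store and embedding-based retriever.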

The Development of the Mobile Interactive Virtual Nuclear Educator With AR and RAG for Learning Nuclear Energy in Indonesia
Rio Nurtantyana, Sahara Eka Kencana Murni, Halim Hamadi et al.

Published: Oct. 9, 2024

Language: English

Citations: 0

“TILSE” Framework for RAG-Based AIGC Feedback Prompts: A Modular and Personalized Intelligent Feedback Generation Method

B. Peng, X. F. Wang, Lei Xu et al.

Published: Sept. 6, 2024

Language: English

Citations: 0
