FEASIBILITY OF USING LOW-PARAMETER LOCAL LLMS IN ANSWERING QUESTIONS FROM ENTERPRISE KNOWLEDGE BASE
Marcin Badurowicz, Stanisław Skulimowski, Maciej Laskowski

et al.

Applied Computer Science, Journal year: 2024, Issue 20(4), pp. 175-191

Published: Dec. 31, 2024

This paper evaluates the feasibility of deploying locally-run Large Language Models (LLMs) for retrieval-augmented question answering (RAG-QA) over internal knowledge bases in small and medium enterprises (SMEs), with a focus on Polish-language datasets. The study benchmarks eight popular open-source and source-available LLMs, including Google’s Gemma-9B and Speakleash’s Bielik-11B, assessing their performance across closed, open, and detailed question types, using metrics for language quality, factual accuracy, response stability, and processing efficiency. The results highlight that desktop-class models, though limited in accuracy (with top scores of 45% and 43% for Gemma and Bielik, respectively), hold promise for early-stage enterprise implementations. Key findings include Bielik's superior performance on open-ended questions and Gemma's efficiency and reliability on closed-type queries. Distribution analyses revealed variability in model outputs, with Bielik showing the most stable distributions. The research underscores the potential of offline-capable LLMs as cost-effective tools for secure knowledge management in Polish SMEs.

Language: English
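
For orientation, below is a minimal, illustrative sketch of the kind of RAG-QA loop the abstract describes: a small in-memory knowledge base, a toy bag-of-words retriever, and a stubbed llm_generate() call standing in for a locally hosted model such as Gemma or Bielik (served, for example, by an Ollama or llama.cpp runtime). The passages, the retrieval scheme, and the function names are assumptions for illustration only, not the authors' pipeline.

# Minimal RAG-QA sketch with a local-LLM placeholder (illustrative, not the paper's code).
import math
from collections import Counter

# Hypothetical enterprise knowledge-base passages (assumed for this example).
KNOWLEDGE_BASE = [
    "Employees submit leave requests through the internal HR portal.",
    "Warehouse inventory is reconciled on the last Friday of each month.",
    "VPN access requires a ticket approved by the IT security officer.",
]

def bow_vector(text: str) -> Counter:
    """Tiny bag-of-words representation used only for retrieval."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the question."""
    qv = bow_vector(question)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda d: cosine(qv, bow_vector(d)),
                    reverse=True)
    return ranked[:k]

def llm_generate(prompt: str) -> str:
    """Placeholder for a call to a locally running model.
    In a real deployment this would send the prompt to a local
    inference server; here it simply echoes the prompt head."""
    return "[local LLM answer based on]: " + prompt[:120]

def answer(question: str) -> str:
    """Assemble retrieved context into a prompt and query the local model."""
    context = "\n".join(retrieve(question))
    prompt = ("Answer the question using only the context below.\n"
              f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    return llm_generate(prompt)

if __name__ == "__main__":
    print(answer("How do I get VPN access?"))

In the study itself, this generation step would be backed by one of the benchmarked local models and the retriever by a proper document index; the sketch only shows how the retrieval, prompt assembly, and local generation stages fit together.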

The Development of the Mobile Interactive Virtual Nuclear Educator With AR and RAG for Learning Nuclear Energy in Indonesia
Rio Nurtantyana, Sahara Eka Kencana Murni, Halim Hamadi

et al.

Published: Oct. 9, 2024

Language: English

Cited: 0

“TILSE” Framework for RAG-Based AIGC Feedback Prompts: A Modular and Personalized Intelligent Feedback Generation Method

B. Peng, X. F. Wang, Lei Xu

et al.

Published: Sep. 6, 2024

Language: English

Cited: 0
