Published: June 6, 2024
Language: English
Technologies, Journal Year: 2025, Volume and Issue: 13(3), P. 119 - 119
Published: March 16, 2025
Efficient language analysis techniques and models are crucial in the artificial intelligence age for enhancing cross-lingual question answering. Transfer learning with state-of-the-art models has been beneficial in this regard, but the performance of low-resource African languages with morphologically rich grammatical structures and unique typologies has shown deficiencies linkable to evaluation methods and scarce training data. To address the former, this paper proposes an evaluation pipeline leveraging a semantic answer similarity method enhanced by automatic annotation. The pipeline uses the Language-agnostic BERT Sentence Embedding (LaBSE) model integrated with an adapted vector similarity measure to compare texts after prediction. Experimental results from multilingual-T5 and AfroXLMR on nine languages of the AfriQA dataset surpassed existing benchmarks that deploy string-based evaluation methods. The approach was also superior to the F1-score-based performances of GPT-4 and Llama-2 on the same downstream task. The annotation technique effectively reduced labelling time while maintaining high performance. Thus, the proposed approach is more efficient than the prevailing F1 and Exact Match metrics for mixed-type question–answer evaluations, making it a natural estimator for real-world deployment.
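The abstract does not spell out the adapted vector measure; a minimal sketch of semantic answer scoring, assuming the public sentence-transformers LaBSE checkpoint and plain cosine similarity (both assumptions, not the authors' exact pipeline), might look like:

```python
# Minimal sketch of semantic answer similarity scoring, assuming the
# sentence-transformers LaBSE checkpoint; the paper's adapted vector
# measure and annotation pipeline are not reproduced here.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("sentence-transformers/LaBSE")

def semantic_answer_similarity(prediction: str, gold: str) -> float:
    """Score a predicted answer against a gold answer via cosine
    similarity of language-agnostic sentence embeddings."""
    emb = model.encode([prediction, gold], normalize_embeddings=True)
    # With normalized embeddings, the dot product equals cosine similarity.
    return float(np.dot(emb[0], emb[1]))

# A string-based metric like Exact Match scores 0 on a paraphrase,
# while the semantic score stays high.
print(semantic_answer_similarity("Addis Ababa", "the city of Addis Ababa"))
```

Unlike Exact Match, this kind of scorer rewards answers that are semantically equivalent to the gold span, which is the behaviour the paper targets for mixed-type question–answer evaluation.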
Language: English
Citations: 0
International Journal of Innovative Science and Research Technology (IJISRT), Journal Year: 2024, Volume and Issue: unknown, P. 470 - 480
Published: Oct. 19, 2024
This review explores recent advancements in Natural Language Understanding-driven Machine Translation (NLU-MT) with a focus on English and the low-resource dialectal Lusoga. A low-resource language such as Lusoga faces significant challenges in machine translation (MT) due to the scarcity of high-quality parallel corpora, the complex morphology inherent to Bantu languages, and variations within Lusoga itself, particularly between Lutenga and Lupakoyo. The paper examines the role of NLU-based MT systems in overcoming these challenges by shifting from word-for-word mapping to meaning-based translation, enabling better handling of dialectal differences. We highlight the success of leveraging linguistic similarities with the related Luganda to improve translation performance through multilingual transfer learning techniques. Key approaches include the use of transformer-based architectures such as the Multilingual Bidirectional Auto-Regressive Transformer (mBART) and the Text-To-Text Transfer Transformer (mT5), specifically selected for their effectiveness in NLU-driven contexts, which have shown promise in enhancing translation accuracy for African languages. However, the review also identifies ongoing obstacles, including historically low demand and the lack of well-developed corpora, which hinder scalability. The paper concludes by emphasizing the potential of hybrid approaches that combine community-driven corpus-building initiatives with improved model architectures to drive further progress in MT. Ultimately, NLU-MT is positioned as a crucial tool not only for bridging communication gaps but also for preserving linguistic diversity and cultural heritage.
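The review names mBART and mT5 but gives no code. The sketch below shows the standard Hugging Face many-to-many mBART-50 API only as an illustration: Lusoga is not among mBART-50's pretrained languages, so Swahili (sw_KE) stands in as a supported Bantu target; an actual Lusoga system would need fine-tuning on parallel data, for example by transferring from the closely related Luganda.

```python
# Illustrative sketch of multilingual MT with mBART-50. Lusoga is NOT a
# supported language, so Swahili (sw_KE) is used as a stand-in target;
# a real Lusoga system would require fine-tuning on parallel data.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_name)

text = "Good morning, how are you?"
inputs = tokenizer(text, return_tensors="pt")
generated = model.generate(
    **inputs,
    # Force the decoder to begin in the target language.
    forced_bos_token_id=tokenizer.lang_code_to_id["sw_KE"],
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```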
Language: English
Citations: 1
Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)
Published: Dec. 28, 2024
Abstract Large Language Models (LLMs) have gained significant popularity in recent years for specialized tasks using prompts due to their low computational cost. Standard methods like prefix tuning utilize special, modifiable tokens that lack semantic meaning and require extensive training to reach their best performance, often falling short. In this context, we propose a novel prompt-tuning method called Semantic Knowledge Tuning (SK-Tuning) that employs meaningful words instead of random tokens. The method relies on a frozen LLM to understand and process the semantic content of the prompt through its zero-shot capabilities. Following this, it integrates the processed prompt with the input text to improve the model's performance on particular tasks. Our experimental results show that SK-Tuning exhibits faster tuning times, fewer parameters, and superior performance on tasks such as text classification and understanding compared to other methods. This approach offers a promising way to optimize the efficiency and effectiveness of LLMs in processing language.
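The SK-Tuning implementation itself is not shown in this abstract. The closest off-the-shelf mechanism is text-initialized prompt tuning, sketched below with the peft library; the backbone (gpt2) and prompt text are placeholders, and this is only an analogue of the core idea (seeding soft prompts with meaningful words while the LLM stays frozen), not the authors' method.

```python
# Rough analogue of semantically initialized prompt tuning via the peft
# library's PromptTuningInit.TEXT. This is NOT the authors' SK-Tuning code:
# virtual prompt tokens are initialized from meaningful words and then
# trained while the base LLM stays frozen.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

base = "gpt2"  # placeholder backbone, chosen only for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    # Meaningful words (not random tokens) seed the soft prompt.
    prompt_tuning_init_text="Classify the sentiment of this movie review:",
    num_virtual_tokens=8,
    tokenizer_name_or_path=base,
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()  # only the prompt embeddings train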
Language: English
Citations: 0