Innovations in Introductory Programming Education: The Role of AI with Google Colab and Gemini
Joe Llerena-Izquierdo, Johan Méndez Reyes, Raquel Ayala Carabajo et al.

Education Sciences, Journal Year: 2024, Volume and Issue: 14(12), P. 1330 - 1330

Published: Dec. 4, 2024

This study explores the impact of artificial intelligence on teaching programming, focusing on the GenAI Gemini tool in Google Colab. It evaluates how this technology influences the comprehension of fundamental concepts, processes, and effective practices. In the research, students' motivation, interest, and satisfaction are determined, as well as the fulfillment and surpassing of their learning expectations. With a quantitative approach and quasi-experimental design, an investigation was carried out in seven programming groups at a polytechnic university in Guayaquil, Ecuador. The results reveal that its use significantly increases interest, with 91% of respondents expressing increased enthusiasm. In addition, 90% feel that the integration meets their expectations, and that it has exceeded those expectations in terms of educational support. This evidences the value of integrating advanced technologies into education, suggesting they can transform the teaching of programming. However, successful implementation depends on timely training of educators, ethics guidance for students, ongoing updates to the technology, and curriculum design that maximizes the capabilities of GenAI.

Language: English

Reducing Hallucinations in Large Language Models Through Contextual Position Encoding

Sarah Desrochers, James Wilson, Matthew Beauchesne et al.

Published: May 31, 2024

In natural language processing, maintaining factual accuracy and minimizing hallucinations in text generation remain significant challenges. Contextual Position Encoding (CPE) presents a novel approach by dynamically encoding positional information based on the context of each token, significantly enhancing the model's ability to generate accurate and coherent text. The integration of CPE into the Mistral Large model resulted in marked improvements in precision, recall, and F1-score, demonstrating superior performance over traditional methods. Furthermore, the enhanced architecture effectively reduced hallucination rates, increasing the reliability of generated outputs. Comparative analysis with baseline models such as GPT-3 and BERT confirmed the efficacy of CPE, highlighting its potential to influence future developments in LLM architecture. The results underscore the importance of advanced techniques in improving the applicability of large language models across various domains requiring high accuracy.
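The core idea behind context-dependent position encoding can be illustrated with a minimal numpy sketch: instead of fixed integer indices, each key's position relative to a query is a cumulative sum of sigmoid gates computed from the context. All names, shapes, and the exact gating function here are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def contextual_positions(queries, keys):
    """Context-dependent positions: the position of key j relative to
    query i is the sum of sigmoid gates over tokens j..i, so positions
    are fractional and depend on content, not just order."""
    logits = queries @ keys.T                   # (T, T) query-key similarity
    gates = 1.0 / (1.0 + np.exp(-logits))       # sigmoid gate in [0, 1]
    mask = np.tril(np.ones_like(gates))         # causal: only attend backwards
    # reversed cumulative sum gives positions[i, j] = sum of gates[i, j..i]
    positions = np.cumsum((gates * mask)[:, ::-1], axis=1)[:, ::-1]
    return positions * mask

rng = np.random.default_rng(0)
T, d = 5, 8
q, k = rng.normal(size=(T, d)), rng.normal(size=(T, d))
pos = contextual_positions(q, k)
print(pos.shape)  # (5, 5)
```

Within each row, contextual distance grows monotonically the farther back a token lies, but by content-dependent fractional steps rather than by exactly one per token.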

Language: English

Citations: 20

Combining LoRA with GPT-Neo to Reduce Large Language Model Hallucination

Shi-han Huang, Chia-Yu Chen

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: June 4, 2024

Abstract The deployment of Large Language Models (LLMs) often suffers from generating hallucinations, leading to outputs that appear plausible but are factually inaccurate or nonsensical. Incorporating Low-Rank Adaptation (LoRA) into GPT-Neo presents a novel approach to mitigating these hallucinations by leveraging the efficiency of low-rank approximations. This research details the integration of LoRA into GPT-Neo, demonstrating significant improvements in predictive performance, factual accuracy, and reduction of hallucination rates. The augmented model shows enhanced robustness and efficiency, making it more suitable for applications requiring high accuracy and reliability. Through comprehensive evaluations involving perplexity, BLEU, and ROUGE-L scores, as well as qualitative analysis, the study highlights the model's ability to generate coherent and contextually appropriate text. The findings demonstrate the potential to transform LLM deployment by reducing computational complexity and memory footprint, thus facilitating the use of large-scale models in resource-constrained environments. This advancement opens new possibilities across various domains, ensuring coherence of generated content.
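The low-rank parameterization that LoRA adds to a frozen layer can be sketched in a few lines of numpy: the pretrained weight stays fixed while a small down-projection and up-projection supply the trainable update. Dimensions, scaling, and initializations below are illustrative assumptions; this is the general LoRA form, not the paper's specific GPT-Neo configuration.

```python
import numpy as np

rng = np.random.default_rng(42)
d_in, d_out, r = 64, 64, 4             # r << d is the low-rank bottleneck
alpha = 8.0                            # LoRA scaling factor

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init

def lora_forward(x):
    # base path plus scaled low-rank update: W x + (alpha / r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# with B zero-initialized, the adapter starts as an exact no-op
assert np.allclose(lora_forward(x), W @ x)

# after a (mock) training step B is no longer zero; for inference the
# update can be folded into a single merged matrix with no extra latency
B = rng.normal(size=(d_out, r)) * 0.01
W_merged = W + (alpha / r) * (B @ A)
assert np.allclose(W_merged @ x, lora_forward(x))
```

Only A and B (r * (d_in + d_out) parameters) are trained, which is what makes the approach attractive in the resource-constrained settings the abstract mentions.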

Language: English

Citations: 15

Dynamic Supplementation of Federated Search Results for Reducing Hallucinations in LLMs
Jichang Chen, Xinnan Huang, Yongping Li et al.

Published: June 6, 2024

The increasing use of AI-generated content has highlighted the critical issue of hallucinations, where models produce factually incorrect or misleading outputs. Addressing this challenge, a novel approach dynamically supplements responses with federated search engine results in real time to significantly reduce hallucinations and enhance response accuracy. The methodology involves integrating data from multiple search engines into responses generated by the Mistral Large model, thereby providing more accurate and contextually appropriate output. Comprehensive evaluation using the Microsoft PromptBench dataset demonstrates substantial improvements in accuracy and relevance, and a reduction in hallucinations. Quantitative performance metrics, statistical analysis, and detailed case studies confirm the effectiveness of the dynamic supplementation approach. The findings suggest significant implications for developing reliable AI applications across various domains, emphasizing the potential of hybrid systems that combine the strengths of large language models and information retrieval. Future research directions include refining triggering mechanisms, expanding data sources, and optimizing the process for further scalability.
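The federated step the abstract describes — querying several engines, merging and deduplicating their results, and grounding the model's prompt in them — can be sketched as follows. The engine functions, merge policy, and prompt template are hypothetical stand-ins; the paper's triggering mechanism and actual search backends are not reproduced here.

```python
from typing import Callable

def merge_results(query: str,
                  engines: list[Callable[[str], list[str]]],
                  limit: int = 3) -> list[str]:
    """Interleave ranked results from several engines, dropping duplicates,
    so no single source dominates the supplemented context."""
    pools = [engine(query) for engine in engines]
    merged, seen = [], set()
    for rank in range(max(len(p) for p in pools)):
        for pool in pools:
            if rank < len(pool) and pool[rank] not in seen:
                seen.add(pool[rank])
                merged.append(pool[rank])
    return merged[:limit]

def supplement_prompt(question: str, snippets: list[str]) -> str:
    """Prepend retrieved evidence so the model answers against sources
    rather than from parametric memory alone."""
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Context:\n{context}\n\nQuestion: {question}"

# Toy engines standing in for real federated search backends.
engine_a = lambda q: ["Paris is the capital of France.", "France is in Europe."]
engine_b = lambda q: ["Paris is the capital of France.", "Paris hosts the Louvre."]
snippets = merge_results("capital of France", [engine_a, engine_b])
print(supplement_prompt("What is the capital of France?", snippets))
```

A production version would add the dynamic part — only triggering retrieval when the model's confidence is low — which is one of the refinements the abstract lists as future work.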

Language: English

Citations: 12

Knowledge Accuracy and Reducing Hallucinations in LLMs via Dynamic Domain Knowledge Injection

Roman Capellini, Frank Atienza, Melanie Sconfield et al.

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: June 7, 2024

Abstract Natural language processing has seen substantial progress with the development of highly sophisticated models capable of understanding and generating human-like text. However, a persistent challenge remains in enhancing the accuracy of these models when dealing with domain-specific knowledge, particularly in avoiding hallucinations, or plausible but incorrect information. The dynamic domain knowledge injection mechanism introduced in this research represents a significant advancement by allowing continuous integration and prioritisation of specialised information, thereby improving the model's performance and reliability. By dynamically adjusting the hidden weights of GPT-Neo based on relevance and accuracy, the modified model achieved higher precision, recall, and F1-scores, and exhibited reduced hallucination rates across diverse domains such as cybersecurity, medical and financial data, and legal documents. A comprehensive evaluation framework, including benchmark creation and metrics, validated the effectiveness of the approach, demonstrating that it can substantially enhance the utility of large language models in specialised fields. The results highlight the transformative potential of the method, offering a robust pathway for more accurate and contextually aware models. Detailed analysis and ablation studies further elucidate the contributions of each component within the modification process, providing critical insights into the optimisation and future applications of this innovative approach.
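One plausible reading of "dynamically adjusting hidden weights based on relevance" is blending per-domain weight deltas into a frozen base weight, scaled by how relevant each domain is to the current input. The sketch below illustrates that reading only; the deltas, relevance scores, and dimensions are invented for illustration and do not reproduce the paper's mechanism.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 16
W_base = rng.normal(size=(d, d))       # a frozen hidden-layer weight

# Hypothetical per-domain weight deltas (the paper targets GPT-Neo
# internals; small random matrices stand in here).
domain_deltas = {
    "medical": rng.normal(size=(d, d)) * 0.05,
    "legal": rng.normal(size=(d, d)) * 0.05,
    "cybersecurity": rng.normal(size=(d, d)) * 0.05,
}

def inject(relevance: dict[str, float]) -> np.ndarray:
    """Blend domain deltas into the base weight, scaled by normalized
    relevance scores for the current input."""
    total = sum(relevance.values())
    W = W_base.copy()
    for domain, score in relevance.items():
        W += (score / total) * domain_deltas[domain]
    return W

# An input scored as mostly medical pulls in mostly the medical delta,
# so the effective weights differ per input: injection is dynamic.
W_med = inject({"medical": 0.9, "legal": 0.05, "cybersecurity": 0.05})
W_leg = inject({"medical": 0.05, "legal": 0.9, "cybersecurity": 0.05})
assert not np.allclose(W_med, W_leg)
```

The key property is that the base model is never overwritten: domain knowledge is injected per input and can be re-prioritised continuously as relevance changes.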

Language: English

Citations: 10

Evaluating Abstract Reasoning and Problem-Solving Abilities of Large Language Models Using Raven's Progressive Matrices

C. C. Zhang, Liuyun Wang

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: June 11, 2024

Abstract Artificial intelligence has rapidly evolved, leading to the development of powerful models capable of performing complex cognitive tasks. Evaluating the abilities of these models through established human tests such as Raven's Progressive Matrices (RPM) offers a novel and significant approach to understanding their abstract reasoning capabilities. The study adapted RPM for text-based interactions, enabling the evaluation of Mistral and Llama without human intervention. Results revealed that both models surpass average human performance in overall accuracy, demonstrating advanced problem-solving skills. However, the analysis also highlighted variability in performance across different types of tasks, with the models excelling in sequential pattern recognition but showing weaknesses in spatial awareness. These findings provide valuable insights into the strengths and limitations of Mistral and Llama, offering a comprehensive evaluation that can guide future advancements in artificial intelligence.

Language: English

Citations: 3

Measuring the IQ of Mainstream Large Language Models in Chinese Using the Wechsler Adult Intelligence Scale

Jingjing Huang, Ou Li

Published: June 7, 2024

Artificial intelligence continues to revolutionize various domains, with large language models (LLMs) pushing the boundaries of what machines can understand and generate. Evaluating the intellectual and linguistic capabilities of LLMs using standardized tests like the Wechsler Adult Intelligence Scale (WAIS) provides a novel and significant approach to understanding their cognitive strengths and limitations. This research presents a comprehensive evaluation of Baidu Ernie and OpenAI ChatGPT, comparing their performance on IQ tasks in Chinese. The assessments revealed that ChatGPT achieved a marginally higher composite score, excelling particularly in verbal comprehension and working memory. Ernie demonstrated superior cultural appropriateness and accuracy, reflecting its strong alignment with the Chinese context. The study involved translating the WAIS into Chinese, integrating multimodal inputs, and applying rigorous statistical analyses to ensure robust and reliable results. The findings demonstrate the distinct strengths of each model, showing ChatGPT's versatility in handling diverse textual data and Ernie's culturally relevant and grammatically precise responses. The implications for future development emphasize the importance of contextually appropriate training and integration to enhance performance. The framework offers valuable insights for advancing artificial intelligence, guiding development towards more intelligent, adaptable, and culturally aware models.

Language: English

Citations: 2

Efficient Conceptual Knowledge Removal in Large Language Models: Methods and Evaluations

Miyim Dimitriou, Daniel Rogowski, Michael C. Anderson et al.

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: Oct. 8, 2024

Abstract The increasing use of deep neural networks has led to models that accumulate vast amounts of knowledge from their training data, often retaining outdated or biased information that needs to be selectively removed. Novel techniques are required to efficiently erase specific conceptual knowledge from these models while maintaining overall performance and avoiding computationally expensive re-training processes. This paper introduces a scalable framework for conceptual knowledge removal through targeted weight modification and sparse fine-tuning, demonstrating how specific representations can be isolated and erased without significant degradation of the model's broader capabilities. The methodology achieves high precision in knowledge suppression by leveraging probing and gradient-based optimization, ensuring minimal disruption to general task performance. Extensive experimental evaluations confirm the effectiveness of the proposed approach, highlighting its application in scenarios where adaptive model refinement is essential for both accuracy and ethical integrity. Contributions to the field include the development of a flexible and efficient mechanism for knowledge erasure, applicable across various architectures, that minimizes computational overhead while enhancing responsiveness to dynamic requirements.
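The "targeted weight modification" part of such a pipeline is often a rank-one projection: once a probe has located a direction that carries the concept, that direction is projected out of a layer's weights. The sketch below shows only this projection step under that assumption; the probing, gradient-based optimization, and sparse fine-tuning the abstract mentions are elided.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32
W = rng.normal(size=(d, d))          # a layer weight that encodes the concept

# Suppose a linear probe found a unit direction v in the input space that
# carries the concept to be removed (the probing step itself is elided).
v = rng.normal(size=d)
v /= np.linalg.norm(v)

# Targeted modification: subtract the rank-one component along v from W,
# so the layer's output no longer varies along the concept direction.
W_erased = W - (W @ v)[:, None] * v[None, :]

# the erased weight annihilates the concept direction ...
assert np.allclose(W_erased @ v, 0.0)
# ... while inputs orthogonal to v pass through unchanged,
# which is the "minimal disruption" property the paper targets
u = rng.normal(size=d)
u -= (u @ v) * v                     # make u orthogonal to v
assert np.allclose(W_erased @ u, W @ u)
```

Because the edit is rank-one per layer, the cost is negligible next to re-training, which is the efficiency argument behind targeted removal.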

Language: English

Citations: 2

Enhancing Contextual Understanding in Large Language Models with Dynamic Dependency Structures: A Methodological Approach

Maki Ito, H Nishikawa, Yuna Sakamoto et al.

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: July 30, 2024

Abstract Sophisticated machine learning models have transformed the ability to understand and generate human language, yet challenges remain in maintaining contextual coherence and relevance over extended sequences. Introducing dynamic dependency structures into GPT-Neo represents a significant advancement, enabling real-time adaptation of syntactic relationships based on evolving context, thereby enhancing the model's performance in generating contextually appropriate and coherent text. The integration of a context-aware dependency updater and reinforcement learning techniques has demonstrated substantial improvements in both quantitative metrics such as perplexity and BLEU scores and in qualitative evaluations. This research details the implementation and evaluation of the modified model, showcasing its superior capabilities in tasks like machine translation and text summarization. The findings highlight the potential to address the limitations of traditional fixed dependency frameworks, offering a robust methodological advancement for more adaptive language modeling. By capturing complex and relevant information, the proposed approach paves the way for the development of advanced AI systems capable of performing language processing with greater accuracy and fluency.

Language: English

Citations: 0
