ChampionNet: a transformer-enhanced neural architecture search framework for athletic performance prediction and training optimization
Lei Chang, Shalli Rani, Muhammad Azeem Akbar

et al.

Deleted Journal, Journal Year: 2025, Volume and Issue: 28(1)

Published: May 7, 2025

Abstract Neural architecture search (NAS) has emerged as a promising approach for automating deep learning model design. However, its application in sports analytics faces unique challenges due to the complex interplay between biomechanical patterns, physiological adaptations, and coaching expertise. Traditional NAS methods struggle to effectively capture the multifaceted nature of athletic performance, often failing to integrate qualitative insights with quantitative measurements. We introduce ChampionNet, a framework incorporating large language models to enhance accuracy in predicting performance and tailoring training regimens. Our approach offers three primary contributions: a hyperdimensional embedding that captures fine-grained features and parameters in exceptional detail, a structure-preserving graph encoding that maintains crucial spatiotemporal relationships in movements, and a novel design that couples forward prediction with backward adaptation pathways. Experiments on various datasets demonstrate that ChampionNet outperforms other methods by 2.5% at 61.9% of the computational cost. Further analyses illustrate the framework's ability to capture patterns in multi-modal data, especially for advanced training needs. These findings support ChampionNet's effectiveness as an integrative optimization solution, highlighting the potential of automated, tailored model design for sports.
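As one plausible reading of the hyperdimensional embedding component (the abstract gives no implementation detail), the sketch below bundles feature-name and feature-value hypervectors into a single high-dimensional representation of an athlete's measurements; the dimensionality, the binding-by-shift trick, and all names are illustrative assumptions, not ChampionNet's actual design.

import numpy as np

def random_hv(dim, rng):
    # Random bipolar {-1, +1} hypervector, the basic symbol of hyperdimensional computing.
    return rng.choice([-1, 1], size=dim)

def encode_athlete(features, dim=10000, seed=0):
    # Bind each feature name to a shifted copy of its value vector, then bundle by summing.
    rng = np.random.default_rng(seed)
    name_hvs = {k: random_hv(dim, rng) for k in features}
    value_hvs = {k: random_hv(dim, rng) for k in features}
    bundled = np.zeros(dim)
    for name, value in features.items():
        # np.roll acts as a simple permutation-based encoding of the quantised value.
        bundled += name_hvs[name] * np.roll(value_hvs[name], int(value))
    return np.sign(bundled)  # collapse back to a bipolar hypervector

hv = encode_athlete({"stride_length_cm": 118, "vo2_max": 62, "heart_rate_rest": 48})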

Language: English

Language models for data extraction and risk of bias assessment in complementary medicine
Honghao Lai, Jiayi Liu, Chunyang Bai

et al.

npj Digital Medicine, Journal Year: 2025, Volume and Issue: 8(1)

Published: Jan. 31, 2025

Large language models (LLMs) have the potential to enhance evidence synthesis efficiency and accuracy. This study assessed LLM-only and LLM-assisted methods for data extraction and risk of bias assessment in 107 trials on complementary medicine. Moonshot-v1-128k and Claude-3.5-sonnet achieved high accuracy (≥95%), with the better-performing model reaching ≥97%. The LLM-assisted approach significantly reduced processing time (14.7 ± 5.9 min vs. 86.9 ± 10.4 min with conventional methods). These findings highlight LLMs' potential when integrated with human expertise.
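An LLM-assisted extraction step could look roughly like the sketch below, which asks a model to return pre-defined trial fields as JSON for a human reviewer to verify; the call_llm stub and the field list are placeholders, not the prompts or schema used in the study.

import json

TRIAL_FIELDS = ["study design", "sample size", "intervention", "comparator", "primary outcome"]

def call_llm(prompt):
    # Placeholder for a request to the chosen model (e.g. Moonshot-v1-128k or Claude-3.5-sonnet).
    raise NotImplementedError

def extract_trial_data(report_text):
    # Ask for a fixed JSON schema so the output can be checked against human extraction.
    prompt = (
        "Extract the following fields from the trial report below and answer only with JSON "
        f"using these keys: {TRIAL_FIELDS}. Use null for fields that are not reported.\n\n"
        + report_text
    )
    return json.loads(call_llm(prompt))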

Language: English

Citations

1

Effects of Incorporating a Large Language Model-Based Adaptive Mechanism Into Contextual Games on Students' Academic Performance, Flow Experience, Cognitive Load and Behavioral Patterns
Minkai Wang, Di Zhang, Jingdong Zhu

et al.

Journal of Educational Computing Research, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 17, 2025

Scientific knowledge is often abstract and challenging, making it difficult for students to apply these concepts effectively. Digital game-based learning (DGBL) offers an engaging and immersive approach, but the fixed resources and predetermined paths in most games limit their ability to adapt to individual learners' needs. Large language models, as advanced conversational agents, are capable of personalized interaction by adapting to users' styles, interests, and preferences. This study explores a large language model-based adaptive contextual game (LLM-ACG) approach aimed at transforming scientific education into engaging, interactive, and supportive environments. Additionally, this research examines the impacts of LLM-ACG on academic performance, flow experiences, cognitive load, and behavioral patterns among students. A quasi-experimental design was employed to compare differences in achievements and experiences between the LLM-ACG and a conventional contextual game (C-CG) among fifth-grade students. Furthermore, an in-depth analysis of student behavior during gameplay was conducted through lagged sequence analysis. The findings indicate that the LLM-ACG demonstrates a clear advantage over the C-CG in terms of enhancing students' achievements and flow experiences. It effectively reduces cognitive load and significantly promotes positive learning behaviors and sustained motivation.
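For the lagged sequence analysis of behavioral patterns, a minimal sketch of the usual computation (adjusted residuals for lag-1 transitions between coded behaviors) is shown below; the behavior codes and the 1.96 threshold are generic assumptions, not the paper's coding scheme.

import numpy as np

def lag_sequential_analysis(codes, labels):
    # Adjusted residuals (z-scores) for lag-1 behavior transitions,
    # as commonly reported in lagged sequence analysis.
    k = len(labels)
    idx = {b: i for i, b in enumerate(labels)}
    obs = np.zeros((k, k))
    for a, b in zip(codes, codes[1:]):
        obs[idx[a], idx[b]] += 1
    n = obs.sum()
    row, col = obs.sum(axis=1, keepdims=True), obs.sum(axis=0, keepdims=True)
    exp = row @ col / n
    z = (obs - exp) / np.sqrt(exp * (1 - row / n) * (1 - col / n))
    return z  # z > 1.96 marks a transition occurring more often than chance

z = lag_sequential_analysis(list("ABACABBC"), labels=["A", "B", "C"])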

Language: English

Citations

1

Research on Sentiment Analysis of Online Public Opinion Based on RoBERTa–BiLSTM–Attention Model
Jiangao Deng, Yue Liu

Applied Sciences, Journal Year: 2025, Volume and Issue: 15(4), P. 2148 - 2148

Published: Feb. 18, 2025

Public opinion comments are an important channel for the public to express their emotions and demands. Accordingly, identifying the sentiment contained in such comments and taking corresponding countermeasures according to its changes is of great theoretical and practical significance for online public opinion management. This study took a public opinion event at a college as an example. Firstly, microblogs and comment data related to the event were crawled with Python code, and pre-processing operations such as cleaning, word splitting, and de-noising were carried out; then, the event was divided into phases based on daily sound volume, the Baidu index, and key time points of the event. Secondly, for sentiment analysis, a supplementary dictionary constructed with the SO-PMI algorithm was merged with a commonly used dictionary to pre-annotate the corpus; the RoBERTa–BiLSTM–Attention model was used to classify the sentiment of microblog comments; after that, four evaluation indexes were selected and ablation experiments were set up to verify the performance of the model. Finally, based on the results of sentiment classification, we drew sentiment trend and evolution graphs for analysis. The results showed that the supplementary dictionary significantly improved pre-labelling accuracy. The model achieved 91.56%, 90.87%, 91.07%, and 91.17% in accuracy, precision, recall, and F1-score, respectively. Situation notifications, expert responses, regulatory dynamics, and secondary events trigger significant fluctuations in sound volume and sentiment.
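A minimal sketch of the RoBERTa–BiLSTM–Attention architecture described here is given below, assuming PyTorch and the Hugging Face transformers library; the checkpoint name, hidden size, and three-class output are illustrative assumptions rather than the authors' exact configuration.

import torch
import torch.nn as nn
from transformers import AutoModel

class RobertaBiLSTMAttention(nn.Module):
    # RoBERTa encoder -> BiLSTM -> additive token attention -> sentiment classifier.
    def __init__(self, pretrained="hfl/chinese-roberta-wwm-ext", hidden=256, num_classes=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(pretrained)
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)            # scores each token
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(h)                              # (B, T, 2*hidden)
        scores = self.attn(h).squeeze(-1)                # (B, T)
        scores = scores.masked_fill(attention_mask == 0, -1e9)
        weights = torch.softmax(scores, dim=-1)          # attention over tokens
        pooled = (weights.unsqueeze(-1) * h).sum(dim=1)  # weighted sum of BiLSTM states
        return self.classifier(pooled)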

Language: English

Citations

1

A comprehensive survey on integrating large language models with knowledge-based methods
Wenli Yang, Lilian Some, Michael Bain

et al.

Knowledge-Based Systems, Journal Year: 2025, Volume and Issue: unknown, P. 113503 - 113503

Published: April 1, 2025

Language: English

Citations

1

Evaluating AI Proficiency in Nuclear Cardiology: Large Language Models take on the Board Preparation Exam

Valerie Builoff, Aakash Shanbhag, Robert J.H. Miller

et al.

Journal of Nuclear Cardiology, Journal Year: 2024, Volume and Issue: unknown, P. 102089 - 102089

Published: Nov. 1, 2024

Language: English

Citations

5

Enhancing Annotated Bibliography Generation with LLM Ensembles
Sergio Bermejo

Published: Jan. 20, 2025

This work proposes a novel approach to enhancing annotated bibliography generation through Large Language Model (LLM) ensembles. In particular, multiple LLMs in different roles (controllable text generation, evaluation, and summarization) are introduced and validated using a systematic methodology to enhance model performance in scholarly tasks. Output diversity among the generating ensemble is obtained by varying LLM parameters, followed by an LLM acting as a judge to assess relevance, accuracy, and coherence. Responses selected through several combining strategies are then merged and refined using summarization and redundancy-removal techniques. Preliminary experimental validation demonstrates that the combined outputs from the ensemble improve coherence and relevance compared to individual responses, leading to a 38% improvement in annotation quality and a 51% reduction in content redundancy, thus highlighting the potential of this approach for automating complex scholarly tasks while maintaining high-quality standards.
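The generate-judge-merge pipeline might be organised roughly as in the sketch below; call_llm is a stub for whatever API the ensemble members sit behind, and the temperatures, 1-5 rubric, and top-2 selection are illustrative choices, not the paper's exact combining strategies.

def call_llm(prompt, temperature):
    # Placeholder for an actual LLM API call; returns generated text.
    raise NotImplementedError

def annotate_reference(reference, temperatures=(0.3, 0.7, 1.0)):
    # 1) Diversify: sample candidate annotations at different temperatures.
    candidates = [call_llm(f"Write an annotated bibliography entry for: {reference}", t)
                  for t in temperatures]
    # 2) Judge: a second LLM pass scores each candidate for relevance, accuracy and coherence.
    scores = [float(call_llm("Rate 1-5 for relevance, accuracy and coherence. "
                             "Reply with a single number.\n" + c, 0.0))
              for c in candidates]
    # 3) Combine: merge the top-scoring candidates and strip redundant content.
    ranked = [c for _, c in sorted(zip(scores, candidates), key=lambda p: p[0], reverse=True)]
    return call_llm("Merge these annotations into one, removing redundancy:\n"
                    + "\n---\n".join(ranked[:2]), 0.2)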

Language: English

Citations

0

Antibiotics and Artificial Intelligence: Clinical Considerations on a Rapidly Evolving Landscape
Daniele Roberto Giacobbe, Sabrina Guastavino, Cristina Marelli

et al.

Infectious Diseases and Therapy, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 15, 2025

The growing interest in leveraging artificial intelligence (AI) tools for healthcare decision-making extends to improving antibiotic prescribing. Large language models (LLMs), a type of AI trained on extensive datasets from diverse sources, can process and generate contextually relevant text. While their potential to enhance patient outcomes is significant, implementing LLM-based support for antibiotic prescribing is complex. Here, we specifically expand the discussion of this crucial topic by introducing three interconnected perspectives: (1) the distinctive commonalities, but also the conceptual differences, between the use of LLMs as assistants for scientific writing and as support for real-world prescribing practice; (2) the possibility and nuances of an expertise paradox; and (3) the peculiarities of the risk of error when considering complex tasks such as antibiotic prescribing.

Language: English

Citations

0

Sentiment Analysis with Large Language Models Applied to the Federal Reserve Beige Book

Tom Espel

Published: Jan. 1, 2025

Language: English

Citations

0

Dehumanizing the human, humanizing the machine: organic consciousness as a hallmark of the persistence of the human against the backdrop of artificial intelligence
Sergio Torres–Martínez

AI & Society, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 28, 2025

Language: English

Citations

0

Large Language Models for Mining Biobank-Derived Insights into Health and Disease
Manuel Corpas, Alfredo Iacoangeli

Research Square (Research Square), Journal Year: 2025, Volume and Issue: unknown

Published: March 10, 2025

Abstract Large Language Models (LLMs) offer transformative potential for analysing biobank-derived datasets, facilitating knowledge extraction, patient stratification, and predictive modelling. This study benchmarks multiple LLMs in retrieving biomedical insights from a leading biobank, the UK Biobank. Biobank-related literature is used as a gold standard for assessing the coverage and retrieval of some of the best-known LLMs, including GPT, Claude, Gemini, Mistral, Llama, and DeepSeek. The findings highlight each model's strengths and limitations, emphasising challenges in data heterogeneity and accessibility. We suggest that future research should take advantage of this power for enhanced precision in biobank data extraction.
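One way to score coverage against a literature-derived gold standard, as a rough illustration only (the abstract does not specify the metric used), is a simple recall-style check like the sketch below; the example associations are hypothetical.

def coverage(llm_findings, gold_standard):
    # Fraction of gold-standard findings that appear (exact, case-insensitive match)
    # in the LLM's retrieved findings; a simple recall-style coverage score.
    retrieved = {f.strip().lower() for f in llm_findings}
    hits = sum(1 for g in gold_standard if g.strip().lower() in retrieved)
    return hits / len(gold_standard)

score = coverage(["BMI - type 2 diabetes", "smoking - lung cancer"],
                 ["BMI - type 2 diabetes", "smoking - lung cancer", "APOE - Alzheimer's disease"])
# score == 2/3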

Language: English

Citations

0