Recommender systems and reinforcement learning for human-building interaction and context aware support: A text mining-driven review of scientific literature
Wenhao Zhang, Matías Quintana, Clayton Miller

et al.

Energy and Buildings, Journal Year: 2024, Volume and Issue: unknown, P. 115247 - 115247

Published: Dec. 1, 2024

Language: English

The Future of AI in Healthcare: Smaller, More Specialized Language Models
María Santos, P. Álvarez

Revista Española de Enfermedades Digestivas, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 1, 2025

This letter discusses the shift from large proprietary AI models to smaller, specialized language models in healthcare. With advancements in fine-tuning techniques, such models can be adapted using affordable computational resources, ensuring data security and empowering smaller institutions. The letter emphasizes the importance of guiding this development to complement human medical expertise.

Language: English

Citations

1

Fine-Tuning Large Language Models for Ontology Engineering: A Comparative Analysis of GPT-4 and Mistral

Dimitrios Doumanas, Andreas Soularidis, Dimitris Spiliotopoulos

et al.

Applied Sciences, Journal Year: 2025, Volume and Issue: 15(4), P. 2146 - 2146

Published: Feb. 18, 2025

Ontology engineering (OE) plays a critical role in modeling and managing structured knowledge across various domains. This study examines the performance of fine-tuned large language models (LLMs), specifically GPT-4 and Mistral 7B, in efficiently automating OE tasks. Foundational OE textbooks are used as the basis for dataset creation and for feeding the LLMs. The methodology involved segmenting texts into manageable chapters, generating question–answer pairs, and translating visual elements into description logic to curate fine-tuning datasets in JSONL format. The research aims to enhance the models’ abilities to generate domain-specific ontologies, with hypotheses asserting that fine-tuned LLMs would outperform their base models and significantly improve performance. Comparative experiments revealed that GPT-4 demonstrated superior accuracy and adherence to ontology syntax, albeit at higher computational costs. Conversely, Mistral 7B excelled in speed and cost efficiency but struggled with more complex tasks, often producing outputs that lacked syntactical precision and relevance. The presented results highlight the necessity of integrating contextual understanding and practical utility in specialized applications, such as Search and Rescue (SAR) missions for wildfire incidents. Both models, despite their limitations, exhibited the potential to apply OE principles. However, the results underscored the importance of aligning training data so that models emulate human expertise effectively. The study, which extends our previous work on the topic, concludes that fine-tuning can be effectively targeted at OE, offering insights for improving future applications. The findings advocate further exploration of hybrid solutions that balance accuracy and efficiency.
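
The JSONL curation step described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the chat-record schema and the sample question–answer pair (translating an axiom into description logic) are assumptions chosen to match common fine-tuning dataset formats.

```python
import json

def make_finetune_record(question: str, answer: str) -> str:
    """Serialize one question-answer pair as a chat-style JSONL line."""
    record = {
        "messages": [
            {"role": "system",
             "content": "You are an ontology engineering assistant."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }
    return json.dumps(record, ensure_ascii=False)

# Hypothetical pair translating a textbook axiom into description logic.
line = make_finetune_record(
    "Express 'every SAR mission has at least one commander' in description logic.",
    "SARMission \u2291 \u2203hasCommander.Commander",
)
print(line)
```

One such line per question–answer pair, appended to a `.jsonl` file, yields a dataset directly consumable by common fine-tuning tooling.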

Language: English

Citations

0

Exploration of Using an Open‐Source Large Language Model for Analyzing Trial Information: A Case Study of Clinical Trials With Decentralized Elements
Ki Young Huh, Ildae Song, Yoonjin Kim

et al.

Clinical and Translational Science, Journal Year: 2025, Volume and Issue: 18(3)

Published: March 1, 2025

Despite interest in clinical trials with decentralized elements (DCTs), analysis of their trends in trial registries is lacking due to heterogeneous designs and unstandardized terms. We explored whether Llama 3, an open-source large language model, could efficiently evaluate these trends. Trial data were sourced from the Aggregate Analysis of ClinicalTrials.gov, focusing on drug trials conducted between 2018 and 2023. We utilized three Llama 3 models with different numbers of parameters: 8b (model 1), 8b fine-tuned with curated data (model 2), and 70b (model 3). Prompt engineering enabled sophisticated tasks such as classification of DCTs with explanations and extraction of decentralized elements. Model performance, evaluated on a 3-month exploratory test dataset, demonstrated that sensitivity could be improved after fine-tuning from 0.0357 to 0.5385. The low positive predictive value of model 2 was improved by pre-screening with DCT-associated expressions, from 0.5385 to 0.9167. However, extraction of decentralized elements was only properly performed by model 3, which had larger parameters. Based on these results, we screened the entire 6-year dataset by applying DCT-associated expressions. After subsequent application of model 2, we identified 692 DCTs. We found a total of 213 trials classified as phase 2, followed by 162 phase 4 trials, 112 phase 3 trials, and 92 phase 1 trials. In conclusion, our study demonstrated the potential of open-source LLMs for analyzing trial information that is not structured in a machine-readable format. Managing biases during application is crucial.
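
The classification step described here can be sketched as a prompt template plus a strict answer parser. This is an illustrative assumption, not the study's actual prompts: the wording of `DCT_PROMPT` and the yes/no answer convention are invented for the example.

```python
import re

# Hypothetical prompt template for DCT classification with explanation.
DCT_PROMPT = (
    "You are a clinical trial analyst. Decide whether the trial below uses "
    "decentralized elements (e.g., telemedicine visits, home health visits, "
    "direct-to-patient drug shipment). Answer 'yes' or 'no', then give a "
    "one-line explanation.\n\nTrial description:\n{description}"
)

def parse_dct_answer(model_output: str) -> bool:
    """Map a free-text model answer onto a boolean DCT label."""
    match = re.match(r"\s*(yes|no)\b", model_output, flags=re.IGNORECASE)
    if match is None:
        raise ValueError(f"unparseable model output: {model_output!r}")
    return match.group(1).lower() == "yes"

prompt = DCT_PROMPT.format(
    description="Remote visits via telemedicine; study drug shipped to patients."
)
label = parse_dct_answer("Yes - the trial ships drugs directly to participants.")
```

Requiring the label at the start of the answer keeps parsing deterministic even when the model appends a free-text explanation, which matters when screening thousands of registry records.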

Language: English

Citations

0

How to Write Effective Prompts for Screening Biomedical Literature Using Large Language Models
Maria Teresa Colangelo, Stefano Guizzardi, Marco Meleti

et al.

BioMedInformatics, Journal Year: 2025, Volume and Issue: 5(1), P. 15 - 15

Published: March 11, 2025

Large language models (LLMs) have emerged as powerful tools for (semi-)automating the initial screening of abstracts in systematic reviews, offering the potential to significantly reduce the manual burden on research teams. This paper provides a broad overview of prompt engineering principles and highlights how traditional PICO (Population, Intervention, Comparison, Outcome) criteria can be converted into actionable instructions for LLMs. We analyze the trade-offs between “soft” prompts, which maximize recall by accepting articles unless they explicitly fail an inclusion requirement, and “strict” prompts, which demand explicit evidence for every criterion. Using a periodontics case study, we illustrate how prompt design affects recall, precision, and overall screening efficiency, and we discuss metrics (accuracy, F1 score) to evaluate performance. We also examine common pitfalls, such as overly lengthy prompts or ambiguous instructions, and underscore the continuing need for expert oversight to mitigate the hallucinations and biases inherent in LLM outputs. Finally, we explore emerging trends, including multi-stage screening pipelines and fine-tuning, while noting ethical considerations related to data privacy and transparency. By applying rigorous evaluation, researchers can optimize LLM-based screening processes, allowing faster and more comprehensive evidence synthesis across biomedical disciplines.
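
The soft-versus-strict distinction this abstract draws can be made concrete with two prompt templates. The criteria and wording below are a sketch under assumed content (a made-up periodontics screening task), not the paper's actual prompts.

```python
# Hypothetical PICO-derived inclusion criteria for a screening task.
CRITERIA = (
    "Population: adults with periodontitis\n"
    "Intervention: non-surgical periodontal therapy\n"
    "Outcome: probing depth reduction"
)

# Soft prompt: include unless an abstract explicitly fails a criterion
# (favors recall).
SOFT_TEMPLATE = (
    "Include the abstract below unless it explicitly fails one of these "
    "criteria:\n{criteria}\n\nAbstract:\n{abstract}\n"
    "Answer 'include' or 'exclude'."
)

# Strict prompt: include only with explicit evidence for every criterion
# (favors precision).
STRICT_TEMPLATE = (
    "Include the abstract below only if it provides explicit evidence for "
    "every criterion:\n{criteria}\n\nAbstract:\n{abstract}\n"
    "Answer 'include' or 'exclude'."
)

def build_prompt(template: str, abstract: str) -> str:
    """Fill a screening template with the criteria and one abstract."""
    return template.format(criteria=CRITERIA, abstract=abstract)

soft = build_prompt(SOFT_TEMPLATE, "A trial of scaling and root planing...")
strict = build_prompt(STRICT_TEMPLATE, "A trial of scaling and root planing...")
```

Running both variants over a labeled sample and comparing recall and precision is one way to choose a default before screening a full corpus.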

Language: English

Citations

0

Domain-specific language models: innovation with inherent risks
Julio Mayol, Marcos Gámez Alastuey, Ramiro Fernandez

et al.

Revista Española de Enfermedades Digestivas, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 1, 2025

There are potential advantages to domain-specific solutions built by fine-tuning pretrained LLMs with healthcare data, but this approach has certain drawbacks concerning privacy, bias, and accuracy.

Language: English

Citations

0

Utilizing Large Language Models to Detect and Decipher Abbreviation in Clinical Notes (Preprint)
Hamed Jafarpour, Cheligeer Cheligeer, Jun Yan

et al.

Published: March 16, 2025

BACKGROUND Preprocessing serves to standardize clinical documentation and create consistency throughout all notes. An essential aspect of this process is text normalization, which mainly entails recognizing abbreviations and translating them into their full forms. A range of methods, including Natural Language Processing (NLP) and Machine Learning (ML) algorithms, are employed to identify and interpret abbreviations found in clinical notes. Recently, fine-tuning Large Language Models (LLMs) and applying prompt engineering techniques have been utilized to carry out various tasks involving clinical text. Each method presents its own set of benefits and drawbacks. OBJECTIVE To utilize LLMs to detect and decipher abbreviations in clinical notes effectively and efficiently. METHODS A framework is proposed for the detection and deciphering of abbreviations within clinical text using LLMs. The framework consists of four phases. First, the task and its sub-tasks are identified. Second, relevant properties of the designated domain are clearly defined. Third, these properties are used to generate optimized examples. The fourth stage involves the application of LLMs, where the examples can serve either to enhance prompts or to create a labeled dataset for fine-tuning. Finally, two case studies were conducted: the first used ChatGPT with the GPT-4 language model as input, while the second experiment fine-tuned both GPT-2 and T5 models on a previously established dataset. RESULTS The findings emphasize the distinct advantage of example-based prompts, whether examples are selected randomly or through guided selection, in comparison to basic prompts. The results indicate that basic prompts consistently yield lower accuracy. Furthermore, guided selection of 5-shot examples demonstrates significantly superior performance relative to random selection. In contrast to the prompting approach, the results indicate that models fine-tuned with the framework exhibit improved performance compared to prompting alone. CONCLUSIONS This research underscores the potential of LLMs for proficient interpretation of abbreviations, a capability particularly evident in the preparation of examples, resulting in reduced costs and improved performance. Further investigation is necessary to develop an LLM specifically designed for this task.
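
The example-based (few-shot) prompting that this abstract favors can be sketched as a small prompt builder. The instruction wording and the labeled demonstration pairs below are illustrative assumptions, not the study's dataset.

```python
def build_few_shot_prompt(examples, sentence, k=5):
    """Assemble a k-shot prompt for expanding clinical abbreviations.

    `examples` is a list of (sentence, expansion) pairs; with k=0 this
    degenerates to a basic (zero-shot) prompt.
    """
    header = ("Expand every clinical abbreviation in the sentence into its "
              "full form.\n\n")
    shots = "".join(
        f"Sentence: {s}\nExpansion: {e}\n\n" for s, e in examples[:k]
    )
    return f"{header}{shots}Sentence: {sentence}\nExpansion:"

# Hypothetical labeled examples; real ones would come from annotated notes.
demos = [
    ("Pt c/o SOB on exertion.",
     "Patient complains of shortness of breath on exertion."),
    ("Hx of HTN and DM2.",
     "History of hypertension and type 2 diabetes mellitus."),
]
prompt = build_few_shot_prompt(demos, "Pt denies CP.", k=2)
```

Guided selection, as the study describes, would replace the fixed `demos` list with examples retrieved for their similarity to the target sentence.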

Language: English

Citations

0

From Barriers to Tactics: Development Study of Behavioral Science-Informed Agentic Workflow for Personalized Nutrition Coaching (Preprint)
Eric Yang, Tomás García, Hannah Williams

et al.

Published: April 3, 2025

BACKGROUND Effective management of cardiometabolic conditions requires sustained positive nutrition habits, often hindered by complex and individualized barriers. Direct human coaching is simply not scalable, while deterministic automated approaches to coaching may lack the personalization needed to address these diverse challenges. OBJECTIVE We report the development and validation of a novel large language model (LLM)-powered agentic workflow designed to provide personalized nutrition coaching by directly identifying and mitigating patient-specific barriers. METHODS We used behavioral science principles to create a comprehensive workflow that can map nutrition-related barriers to corresponding evidence-based coaching strategies. First, a specialized LLM agent intentionally probes for and identifies the root causes of a patient’s dietary struggles. Subsequently, a separate agent delivers tailored tactics to overcome those specific barriers. We conducted a user study with individuals (N=16) to inform our design and then validated the approach through an additional user study (n=6). We also conducted a large-scale simulation study, grounded on real patient vignettes and expert-validated metrics, in which experts evaluated the system’s performance across multiple scenarios and domains. RESULTS In the user study, the system accurately identified barriers and provided tailored guidance. Five out of 6 participants agreed that the system helped them recognize obstacles preventing them from being healthier, and all strongly agreed that the advice felt tailored to their situation. The primary barrier was identified in more than 90% of cases. Additionally, experts determined that the system delivered actionable advice empathetically, with average ratings of 4.17-4.79 on a 5-point Likert scale. CONCLUSIONS Our findings demonstrate the potential of this LLM-powered agentic workflow to improve nutrition coaching by providing personalized, behaviorally-informed interventions. CLINICALTRIAL NA
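
The two-agent structure described here (one agent identifies the barrier, a second delivers a tactic) can be sketched as two stages over a barrier-to-tactic map. Everything below is an assumption for illustration: the barrier labels, tactics, and the keyword heuristic standing in for the LLM calls are invented, and the study's actual framework is far richer.

```python
# Hypothetical barrier -> evidence-based tactic map.
TACTICS = {
    "time_scarcity": "Suggest 10-minute batch-prep recipes for busy evenings.",
    "emotional_eating": "Offer a brief urge-surfing exercise before snacking.",
    "cost": "Recommend seasonal produce and frozen-vegetable swaps.",
}

def barrier_agent(transcript: str) -> str:
    """Stand-in for the first LLM agent, which probes for the root-cause
    barrier; a keyword heuristic replaces the model call for illustration."""
    text = transcript.lower()
    if "no time" in text or "busy" in text:
        return "time_scarcity"
    if "stress" in text or "comfort" in text:
        return "emotional_eating"
    return "cost"

def tactic_agent(barrier: str) -> str:
    """Stand-in for the second agent, which delivers a tailored tactic."""
    return TACTICS[barrier]

barrier = barrier_agent("I'm too busy to cook, no time after work.")
advice = tactic_agent(barrier)
```

Separating diagnosis from intervention keeps each prompt focused, which is one plausible reason the study reports high barrier-identification accuracy.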

Language: English

Citations

0

Integrating domain-specific knowledge and fine-tuned general-purpose large language models for question-answering in construction engineering management
Shenghua Zhou, Xiaoyang Liu, Dezhi Li

et al.

Automation in Construction, Journal Year: 2025, Volume and Issue: 175, P. 106206 - 106206

Published: April 21, 2025

Language: English

Citations

0
