Energy and Buildings, Journal Year: 2024, Volume and Issue: unknown, P. 115247 - 115247
Published: Dec. 1, 2024
Language: English
Revista Española de Enfermedades Digestivas, Journal Year: 2025, Volume and Issue: unknown
Published: Jan. 1, 2025
This letter discusses the shift from large proprietary AI models to smaller, specialized language models in healthcare. With advancements in fine-tuning techniques, such models can be adapted using affordable resources, ensuring data security and empowering smaller institutions. The letter emphasizes the importance of guiding development so that these models complement human medical expertise.
Language: English
Citations: 1
Applied Sciences, Journal Year: 2025, Volume and Issue: 15(4), P. 2146 - 2146
Published: Feb. 18, 2025
Ontology engineering (OE) plays a critical role in modeling and managing structured knowledge across various domains. This study examines the performance of fine-tuned large language models (LLMs), specifically GPT-4 and Mistral 7B, in efficiently automating OE tasks. Foundational OE textbooks are used as the basis for dataset creation and for feeding the LLMs. The methodology involved segmenting texts into manageable chapters, generating question–answer pairs, and translating visual elements into description logic to curate datasets in JSONL format. The research aims to enhance the models' abilities to generate domain-specific ontologies, with hypotheses asserting that fine-tuned LLMs would outperform their base models and significantly improve performance. Comparative experiments revealed that GPT-4 demonstrated superior accuracy and adherence to ontology syntax, albeit at higher computational costs. Conversely, Mistral 7B excelled in speed and cost efficiency but struggled with specialized tasks, often producing outputs that lacked syntactical precision and relevance. The presented results highlight the necessity of integrating contextual understanding and practical utility in specialized applications, such as Search and Rescue (SAR) missions in wildfire incidents. Both models, despite limitations, exhibited potential for applying OE principles. However, the study underscored the importance of aligning training data to emulate human expertise effectively. The study, which extends our previous work on the topic, concludes that fine-tuned LLMs show promise for targeted OE, offering insights for improving future applications. The findings advocate further exploration of hybrid solutions that balance accuracy and efficiency.
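The JSONL curation step this abstract describes can be sketched as follows; the field names (`prompt`, `completion`) and the example pairs are illustrative assumptions, not taken from the cited study:

```python
import json

def to_jsonl(qa_pairs):
    """Serialize (question, answer) tuples as JSONL: one JSON object per line,
    the format commonly expected by LLM fine-tuning pipelines."""
    return [
        json.dumps({"prompt": q, "completion": a}, ensure_ascii=False)
        for q, a in qa_pairs
    ]

# Hypothetical question-answer pairs derived from a textbook chapter
pairs = [
    ("What is an ontology class?",
     "A set of individuals sharing common properties."),
    ("Express 'every SAR mission has a location' in description logic.",
     "SARMission \u2291 \u2203hasLocation.Location"),
]

lines = to_jsonl(pairs)
for line in lines:
    record = json.loads(line)  # round-trips: each line is valid JSON
```

Each line of the resulting file is an independent JSON record, which is what makes JSONL convenient for streaming large fine-tuning datasets.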
Language: English
Citations: 0
Clinical and Translational Science, Journal Year: 2025, Volume and Issue: 18(3)
Published: March 1, 2025
Despite interest in clinical trials with decentralized elements (DCTs), analysis of their trends in trial registries is lacking due to heterogeneous designs and unstandardized terms. We explored whether Llama 3, an open-source large language model, could efficiently evaluate these trends. Trial data were sourced from the Aggregate Analysis of ClinicalTrials.gov, focusing on drug trials conducted between 2018 and 2023. We utilized three Llama 3 models with a different number of parameters: 8b (model 1), 8b fine-tuned with curated data (model 2), and 70b (model 3). Prompt engineering enabled sophisticated tasks such as classification of DCTs with explanations and extraction of decentralized elements. Model performance, evaluated on a 3-month exploratory test dataset, demonstrated that sensitivity could be improved after fine-tuning, from 0.0357 to 0.5385. The low positive predictive value of model 2 was improved by filtering with DCT-associated expressions, from 0.5385 to 0.9167. However, extraction was only properly performed by model 3, which had larger parameters. Based on these results, we screened the entire 6-year dataset by applying DCT-associated expressions. After subsequent application of model 2, we identified 692 DCTs. We found a total of 213 trials classified as phase 2, followed by 162 phase 4 trials, 112 phase 3 trials, and 92 phase 1 trials. In conclusion, our study demonstrates the potential of large language models for analyzing trial information that is not structured in a machine-readable format. Managing biases during such analyses is crucial.
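The sensitivity and positive predictive value figures quoted in this abstract are standard confusion-matrix ratios; a minimal sketch of how they are computed is below. The counts are invented for illustration (chosen only so the ratios round to the reported values) and do not come from the study:

```python
def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def ppv(tp, fp):
    """Positive predictive value (precision): TP / (TP + FP)."""
    return tp / (tp + fp)

# Invented counts for illustration only
print(round(sensitivity(7, 6), 4))  # 7/13  -> 0.5385
print(round(ppv(11, 1), 4))         # 11/12 -> 0.9167
```

Filtering candidates with DCT-associated expressions, as the study describes, removes false positives and therefore raises PPV without changing how the metric itself is defined.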
Language: English
Citations: 0
BioMedInformatics, Journal Year: 2025, Volume and Issue: 5(1), P. 15 - 15
Published: March 11, 2025
Large language models (LLMs) have emerged as powerful tools for (semi-)automating the initial screening of abstracts in systematic reviews, offering the potential to significantly reduce the manual burden on research teams. This paper provides a broad overview of prompt engineering principles and highlights how traditional PICO (Population, Intervention, Comparison, Outcome) criteria can be converted into actionable instructions for LLMs. We analyze the trade-offs between "soft" prompts, which maximize recall by accepting articles unless they explicitly fail an inclusion requirement, and "strict" prompts, which demand explicit evidence for every criterion. Using a periodontics case study, we illustrate how prompt design affects recall, precision, and overall efficiency, and we discuss metrics (accuracy, F1 score) to evaluate performance. We also examine common pitfalls, such as overly lengthy prompts or ambiguous instructions, and underscore the continuing need for expert oversight to mitigate the hallucinations and biases inherent in LLM outputs. Finally, we explore emerging trends, including multi-stage pipelines and fine-tuning, while noting ethical considerations related to data privacy and transparency. By applying rigorous evaluation, researchers can optimize LLM-based screening processes, allowing faster and more comprehensive evidence synthesis across biomedical disciplines.
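The accuracy and F1 metrics this abstract mentions can be computed directly from include/exclude decisions compared against expert labels; a minimal sketch follows, with the decision vectors invented purely for illustration:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def evaluate(predicted, gold):
    """Compare include/exclude screening decisions against expert labels."""
    tp = sum(p and g for p, g in zip(predicted, gold))
    fp = sum(p and not g for p, g in zip(predicted, gold))
    fn = sum(not p and g for p, g in zip(predicted, gold))
    tn = sum(not p and not g for p, g in zip(predicted, gold))
    accuracy = (tp + tn) / len(gold)
    return accuracy, f1_score(tp, fp, fn)

gold        = [True, True, False, False, True]  # expert decisions (invented)
soft_prompt = [True, True, True,  False, True]  # high recall, one false positive
acc, f1 = evaluate(soft_prompt, gold)
```

A "soft" prompt like the one simulated here trades a false positive (lower precision) for perfect recall, which is often the right trade-off in screening, where a missed relevant article is costlier than an extra one to review manually.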
Language: English
Citations: 0
Revista Española de Enfermedades Digestivas, Journal Year: 2025, Volume and Issue: unknown
Published: Jan. 1, 2025
Domain-specific solutions built by fine-tuning pretrained LLMs with healthcare data offer potential advantages, but the approach has drawbacks concerning privacy, bias, and accuracy.
Language: English
Citations: 0
Published: March 16, 2025
Language: English
Citations: 0
Published: April 3, 2025
Language: English
Citations: 0
Automation in Construction, Journal Year: 2025, Volume and Issue: 175, P. 106206 - 106206
Published: April 21, 2025
Language: English
Citations: 0
Energy and Buildings, Journal Year: 2024, Volume and Issue: unknown, P. 115247 - 115247
Published: Dec. 1, 2024
Language: English
Citations: 1