Lecture notes in computer science, Journal year: 2024, Issue: unknown, pp. 17–29
Published: Dec. 12, 2024
Language: English
Information, Journal year: 2025, Issue: 16(2), pp. 90–90
Published: Jan. 24, 2025
Classic health-related quality of life (HRQOL) metrics are cumbersome, time-intensive, and subject to biases based on the patient's native language, educational level, and cultural values. Natural language processing (NLP) converts text into quantitative metrics. Sentiment analysis enables subject-matter experts to construct domain-specific lexicons that assign either a negative (−1) or positive (1) value to certain words. The growth of telehealth provides opportunities to apply sentiment analysis to transcripts of adult spinal deformity patients' visits and derive a novel, less biased HRQOL metric. In this study, we demonstrate the feasibility of constructing a spine-specific lexicon and deriving an HRQOL metric for patients from their preoperative visit transcripts. We asked each of twenty-five (25) patients seven open-ended questions about their conditions and treatment during their visits. We analyzed the Pearson correlation between our metric and established metrics (the Scoliosis Research Society-22 questionnaire [SRS-22], the 36-Item Short Form Health Survey [SF-36], and the Oswestry Disability Index [ODI]). The results show statistically significant correlations (0.43–0.74) with the conventional metrics. This is evidence that applying NLP techniques to patient visit transcripts can yield an effective HRQOL metric.
Language: English
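The scoring approach described in the abstract above can be sketched in a few lines of Python. The lexicon entries, token matching, and averaging rule here are illustrative assumptions for the sketch, not the authors' actual spine-specific lexicon or pipeline.

```python
from math import sqrt

# Illustrative lexicon: each domain word is assigned -1 (negative) or +1 (positive).
# These entries are assumptions, not the study's actual spine-specific lexicon.
SPINE_LEXICON = {
    "pain": -1, "stiffness": -1, "numbness": -1,
    "better": 1, "improved": 1, "active": 1,
}

def sentiment_score(transcript: str) -> float:
    """Average lexicon value over matched words; 0.0 when nothing matches."""
    hits = [SPINE_LEXICON[w] for w in transcript.lower().split() if w in SPINE_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Per-patient scores from `sentiment_score` over a cohort would then be correlated against SRS-22, SF-36, or ODI totals via `pearson`.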
Cited: 1

medRxiv (Cold Spring Harbor Laboratory), Journal year: 2025, Issue: unknown
Published: Feb. 10, 2025
The effectiveness of public health interventions, such as vaccination and social distancing, relies on public support and adherence. Social media has emerged as a critical platform for understanding and fostering public engagement with interventions. However, the lack of real-time surveillance of public health issues leveraging social media data, particularly during emergencies, leads to delayed responses and suboptimal policy adjustments. To address this gap, we developed PH-LLM (Public Health Large Language Models for Infoveillance), a novel suite of large language models (LLMs) specifically designed for real-time public health monitoring. We curated a multilingual training corpus comprising 593,100 instruction-output pairs from 36 datasets, covering 96 infoveillance tasks and 6 question-answering datasets based on social media data. PH-LLM was trained using quantized low-rank adapters (QLoRA) and LoRA plus, based on Qwen 2.5, which supports 29 languages. The suite includes six models of different sizes: 0.5B, 1.5B, 3B, 7B, 14B, and 32B. To evaluate PH-LLM, we constructed a benchmark of 19 English and 20 multilingual datasets across 10 tasks (totaling 52,158 unseen instruction-output pairs). We compared PH-LLM's performance against leading open-source models, including Llama-3.1-70B-Instruct, Mistral-Large-Instruct-2407, and Qwen2.5-72B-Instruct, as well as the proprietary GPT-4o. Across evaluation tasks, PH-LLM consistently outperformed baseline models of similar or larger sizes, including instruction-tuned versions of Qwen2.5, Llama3.1/3.2, Mistral, and bloomz, with PH-LLM-32B achieving state-of-the-art results. Notably, PH-LLM-14B surpassed GPT-4o in both English (>=56.0% vs. <=52.3%) and multilingual tasks (>=59.6% vs. 59.1%). The only exception was PH-LLM-7B, which scored slightly lower on average (48.7%) than Qwen2.5-7B-Instruct (50.7%), although it outperformed GPT-4o mini (46.9%), Mistral-Small-Instruct-2409 (45.8%), Llama-3.1-8B-Instruct (45.4%), and bloomz-7b1-mt (27.9%). PH-LLM represents a significant advancement in infoveillance, offering real-time capabilities and cost-effective solutions for monitoring public sentiment and issues. By equipping global, national, and local public health agencies with timely insights, PH-LLM has the potential to enhance rapid response strategies, improve policy-making, and strengthen communication during public health crises and beyond. This study is supported in part by NIH grants R01LM013337 (YL).
Language: English
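The instruction-output pairs used for fine-tuning, as described above, can be pictured as chat-style JSONL records. The field names and the example pair below are assumptions for illustration, not necessarily PH-LLM's actual training schema.

```python
import json

def to_chat_record(instruction: str, output: str, lang: str = "en") -> dict:
    """Wrap one infoveillance instruction-output pair as a chat-format
    training record (field names are assumed for this sketch)."""
    return {
        "lang": lang,
        "messages": [
            {"role": "user", "content": instruction},
            {"role": "assistant", "content": output},
        ],
    }

# One hypothetical stance-classification pair serialized to a JSONL line.
pairs = [("Classify the stance of this post toward vaccination.", "favor")]
jsonl = "\n".join(json.dumps(to_chat_record(i, o)) for i, o in pairs)
```

Records in this shape can be fed to standard chat-template tokenization before LoRA/QLoRA fine-tuning.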
Cited: 0

Lecture notes in computer science, Journal year: 2024, Issue: unknown, pp. 17–29
Published: Dec. 12, 2024
Language: English
Cited: 0