MentaLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models
Kailai Yang, Tianlin Zhang, Ziyan Kuang

et al.

Proceedings of the ACM Web Conference 2024, Journal year: 2024, Issue: unknown, pp. 4489-4500

Published: May 8, 2024

As an integral part of people's daily lives, social media is becoming a rich source for automatic mental health analysis. As traditional discriminative methods bear poor generalization ability and low interpretability, recent large language models (LLMs) have been explored for interpretable mental health analysis on social media, which aims to provide detailed explanations along with predictions in zero-shot or few-shot settings. The results show that LLMs still achieve unsatisfactory classification performance in a zero-shot/few-shot manner, which further significantly affects the quality of the generated explanations. Domain-specific finetuning is an effective solution, but it faces two critical challenges: 1) a lack of high-quality training data; 2) no open-source foundation LLMs. To alleviate these problems, we formally model interpretable mental health analysis as a text generation task and build the first multi-task, multi-source interpretable mental health instruction (IMHI) dataset with 105K data samples to support LLM instruction tuning and evaluation. The raw data are collected from 10 existing sources covering 8 tasks. We prompt ChatGPT with expert-designed prompts to obtain explanations. To ensure the reliability of the explanations, we perform strict human evaluations on the correctness, consistency, and quality of the generated data. Based on the IMHI dataset and LLaMA2 foundation models, we train MentaLLaMA, an instruction-following LLM series for interpretable mental health analysis on social media. We evaluate MentaLLaMA and other advanced LLMs on the IMHI evaluation benchmark, a holistic benchmark for interpretable mental health analysis. The results show that MentaLLaMA approaches state-of-the-art methods in classification correctness and generates human-level explanations. MentaLLaMA also shows strong generalizability to unseen tasks. The project is available at https://github.com/SteveKGYang/MentaLLaMA.

Language: English
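
The released MentaLLaMA checkpoints are instruction-following causal language models, so they can be queried with standard Hugging Face tooling. The sketch below is a minimal, hedged example of asking the model for a prediction plus an explanation: the checkpoint name and the prompt wording are assumptions rather than details from the abstract, so consult the project repository linked above for the official model IDs and the exact IMHI instruction format.

# Minimal sketch of querying a released MentaLLaMA checkpoint with Hugging Face
# transformers. The model ID and prompt wording are illustrative assumptions; see
# https://github.com/SteveKGYang/MentaLLaMA for the official checkpoints and the
# exact IMHI-style instruction format.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "klyang/MentaLLaMA-chat-7B"  # assumed checkpoint name; verify against the repo

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# An IMHI-style instruction: ask for a classification plus a natural-language explanation.
post = "I haven't slept properly in weeks and nothing feels worth doing anymore."
prompt = (
    'Consider this post: "' + post + '" '
    "Question: Does the poster suffer from depression? "
    "Answer the question and explain your reasoning."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens (the model's answer and explanation).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))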

Cited: 27