MentaLLaMA-chat-7B
MentaLLaMA-chat-7B is the first open-source, instruction-following large language model for interpretable mental health analysis, aiming to provide reliable explanations for its predictions.
Quick Start
MentaLLaMA-chat-7B is part of the MentaLLaMA project. It is a fine-tuned model based on the Meta LLaMA2-chat-7B foundation model and the full IMHI instruction tuning data. The model is expected to conduct complex mental health analysis for various mental health conditions and to offer reliable explanations for its predictions. It is fine-tuned on the IMHI dataset with 75K high-quality natural language instructions to enhance its performance in downstream tasks. A comprehensive evaluation on the IMHI benchmark with 20K test samples shows that MentaLLaMA approaches state-of-the-art discriminative methods in correctness and generates high-quality explanations.
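To give a sense of the inputs the model handles, the sketch below shows an IMHI-style instruction: a social media post paired with a question about a mental health condition. The wording of the post and question is illustrative only and is not drawn from the IMHI dataset.

# Illustrative IMHI-style instruction: the model answers the question and
# explains the reasoning behind its prediction.
prompt = (
    'Consider this post: "I have barely slept for weeks and nothing feels worth doing anymore." '
    'Question: Does the poster suffer from depression?'
)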
Features
Ethical Consideration
Although experiments on MentaLLaMA show promising performance on interpretable mental health analysis, we emphasize that all predicted results and generated explanations should only be used for non-clinical research. Help-seekers should seek assistance from professional psychiatrists or clinical practitioners. Recent studies have indicated that LLMs may introduce potential biases, such as gender gaps. Incorrect prediction results, inappropriate explanations, and over-generalization also illustrate the potential risks of current LLMs. Therefore, there are still many challenges in applying the model to real-scenario mental health monitoring systems.
Other Models in MentaLLaMA
In addition to MentaLLaMA-chat-7B, the MentaLLaMA project includes other models: MentaLLaMA-chat-13B, MentalBART, and MentalT5.
- MentaLLaMA-chat-13B: Fine-tuned based on the Meta LLaMA2-chat-13B foundation model and the full IMHI instruction tuning data. The training data covers 10 mental health analysis tasks.
- MentalBART: Fine-tuned based on the BART-large foundation model and the full IMHI-completion data. The training data covers 10 mental health analysis tasks. This model doesn't have instruction-following ability but is more lightweight and performs well in interpretable mental health analysis in a completion-based manner.
- MentalT5: Fine-tuned based on the T5-large foundation model and the full IMHI-completion data. The training data covers 10 mental health analysis tasks. This model doesn't have instruction-following ability but is more lightweight and performs well in interpretable mental health analysis in a completion-based manner (a usage sketch follows this list).
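As a rough sketch of the completion-based usage of MentalBART and MentalT5, the snippet below loads a sequence-to-sequence checkpoint and generates a completion. The checkpoint path is a placeholder, not an official repository name; replace it with the actual MentalBART or MentalT5 checkpoint.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Placeholder path; substitute the actual MentalBART or MentalT5 checkpoint.
tokenizer = AutoTokenizer.from_pretrained('path/to/MentalBART')
model = AutoModelForSeq2SeqLM.from_pretrained('path/to/MentalBART')
# Completion-based input: the post and question are given directly, without an instruction.
inputs = tokenizer('Consider this post: "I feel empty all the time." Question: Does the poster suffer from depression?', return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))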
Usage Examples
Basic Usage
from transformers import LlamaTokenizer, LlamaForCausalLM
tokenizer = LlamaTokenizer.from_pretrained('klyang/MentaLLaMA-chat-7B')
model = LlamaForCausalLM.from_pretrained('klyang/MentaLLaMA-chat-7B', device_map='auto')
In this example, LlamaTokenizer is used to load the tokenizer, and LlamaForCausalLM is used to load the model. The device_map='auto' argument automatically places the model on a GPU if one is available.
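With the tokenizer and model loaded, generation follows the standard transformers pattern. The prompt and generation settings below are illustrative, not an official template.

# Tokenize an illustrative IMHI-style prompt and move it to the model's device.
prompt = 'Consider this post: "I feel anxious every time I have to leave the house." Question: Does the poster suffer from anxiety?'
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
# Generate the prediction together with its explanation.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))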
License
MentaLLaMA-chat-7B is licensed under MIT. For more details, please refer to the MIT license.
Documentation
Citation
If you use MentaLLaMA-chat-7B in your work, please cite our paper:
@misc{yang2023mentalllama,
      title={MentalLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models},
      author={Kailai Yang and Tianlin Zhang and Ziyan Kuang and Qianqian Xie and Sophia Ananiadou},
      year={2023},
      eprint={2309.13567},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}