---
license: mit
language:
- en
metrics:
- f1
tags:
- medical
---
# Introduction
MentaLLaMA-33B-lora is part of the [MentaLLaMA](https://github.com/SteveKGYang/MentalLLaMA) project, the first open-source large language model (LLM) series for
interpretable mental health analysis with instruction-following capability. This model is fine-tuned from the Vicuna-33B foundation model on the full IMHI instruction tuning data,
using LoRA due to limited computational resources.
The model is expected to perform complex mental health analysis for various mental health conditions and to give reliable explanations for each of its predictions.
It is fine-tuned on the IMHI dataset with 75K high-quality natural language instructions to boost its performance in downstream tasks.
We perform a comprehensive evaluation on the IMHI benchmark with 20K test samples. The results show that MentaLLaMA approaches state-of-the-art discriminative
methods in correctness and generates high-quality explanations.
# Ethical Consideration
Although experiments on MentaLLaMA show promising performance on interpretable mental health analysis, we stress that
all predicted results and generated explanations should only be used
for non-clinical research, and help-seekers should get assistance
from professional psychiatrists or clinical practitioners. In addition,
recent studies have indicated that LLMs may introduce potential
bias, such as gender gaps. Meanwhile, some incorrect prediction results, inappropriate explanations, and over-generalization
also illustrate the potential risks of current LLMs. Therefore, there
are still many challenges in applying the model to real-world
mental health monitoring systems.
## Other Models in MentaLLaMA
In addition to MentaLLaMA-33B-lora, the MentaLLaMA project includes other models: MentaLLaMA-chat-13B, MentaLLaMA-chat-7B, MentalBART, and MentalT5.
- **MentaLLaMA-chat-13B**: This model is finetuned based on the Meta LLaMA2-chat-13B foundation model and the full IMHI instruction tuning data. The training data covers
10 mental health analysis tasks.
- **MentaLLaMA-chat-7B**: This model is finetuned based on the Meta LLaMA2-chat-7B foundation model and the full IMHI instruction tuning data. The training data covers
10 mental health analysis tasks.
- **MentalBART**: This model is finetuned based on the BART-large foundation model and the full IMHI-completion data. The training data covers
10 mental health analysis tasks. This model doesn't have instruction-following ability but is more lightweight and performs well in interpretable mental health analysis
in a completion-based manner.
- **MentalT5**: This model is finetuned based on the T5-large foundation model and the full IMHI-completion data. The training data covers
10 mental health analysis tasks. This model doesn't have instruction-following ability but is more lightweight and performs well in interpretable mental health analysis
in a completion-based manner.
## Usage
You can use the MentaLLaMA-33B-lora model in your Python project with the Hugging Face Transformers library.
Since our model is based on the Vicuna-33B foundation model, you need to first download the Vicuna-33B model [here](https://huggingface.co/lmsys/vicuna-33b-v1.3)
and put it under the `./vicuna-33B` dir. Then download the MentaLLaMA-33B-lora weights and put them under the `./MentaLLaMA-33B-lora` dir.
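If you prefer to fetch both checkpoints from the Hugging Face Hub programmatically, a minimal sketch using `huggingface_hub.snapshot_download` could look like the following (the repo IDs are assumptions; substitute whichever sources you actually download from):
```python
# Sketch only: the repo IDs below are assumptions, adjust them to the repos you actually use.
from huggingface_hub import snapshot_download

# Vicuna-33B foundation model -> ./vicuna-33B
snapshot_download(repo_id="lmsys/vicuna-33b-v1.3", local_dir="./vicuna-33B")

# MentaLLaMA-33B LoRA adapter weights -> ./MentaLLaMA-33B-lora
snapshot_download(repo_id="klyang/MentaLLaMA-33B-lora", local_dir="./MentaLLaMA-33B-lora")
```
With both checkpoints in place, you can load the model and tokenizer: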
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the LoRA adapter; the Vicuna-33B base model referenced in the adapter config is loaded automatically.
peft_model = AutoPeftModelForCausalLM.from_pretrained("./MentaLLaMA-33B-lora")
# Load the tokenizer shipped with the adapter checkpoint.
tokenizer = AutoTokenizer.from_pretrained("./MentaLLaMA-33B-lora")
```
In this example, `AutoPeftModelForCausalLM` automatically loads both the base model and the LoRA weights from the downloaded directory, and `AutoTokenizer` loads the tokenizer.
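To run inference, the loaded adapter model can be used like any other causal LM. Below is a minimal generation sketch; the prompt is an illustrative made-up example rather than an official IMHI instruction, and generation settings such as `max_new_tokens` are assumptions you may want to tune:
```python
# Illustrative prompt; not taken from the IMHI instruction set.
prompt = (
    "Consider this post: \"I haven't slept properly in weeks and nothing feels worth doing anymore.\" "
    "Question: Does the poster suffer from depression?"
)

# Tokenize, generate, and decode the model's explanation.
inputs = tokenizer(prompt, return_tensors="pt").to(peft_model.device)
outputs = peft_model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Note that a 33B model loaded in full precision requires a large amount of memory; passing options such as `torch_dtype=torch.float16` and `device_map="auto"` to `from_pretrained` is a common way to reduce the footprint and spread the weights across available GPUs.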
## License
MentaLLaMA-33B-lora is licensed under the MIT license. For more details, please see the MIT license file.
## Citation
If you use MentaLLaMA-33B-lora in your work, please cite our paper:
```bibtex
@misc{yang2023mentalllama,
title={MentaLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models},
author={Kailai Yang and Tianlin Zhang and Ziyan Kuang and Qianqian Xie and Sophia Ananiadou},
year={2023},
eprint={2309.13567},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```