🚀 Mental Health Assessment Large Language Model
This large language model is designed to evaluate the severity of mental health issues by analyzing text or speech input from users such as speakers, writers, and patients. The training dataset consists of diagnoses made by psychiatrists based on the text or speech of patients with varying degrees of mental health problems.
The model has multiple applications. It can help doctors diagnose mental health conditions in patients, enable self-diagnosis for those wanting to understand their own mental health, or analyze the psychological characteristics of fictional characters.
On the test dataset (30,477 rows), the model achieves an accuracy of 0.78 and an F1 score of 0.77.
This model is part of my project on fine-tuning open-source LLMs to predict various human cognitive attributes (e.g., personality, attitude, mental status, etc.).
🚀 Quick Start
Test Examples
The following test examples can be used in the API bar:
- "I was okay just a moment ago. I will learn how to be okay again."
- "There were days when she was unhappy; she did not know why, when it did not seem worthwhile to be glad or sorry, to be alive or dead; when life appeared to her like a grotesque pandemonium and humanity like worms struggling blindly toward inevitable annihilation."
- "I hope to one day see a sea of people all wearing silver ribbons as a sign that they understand the secret battle and as a celebration of the victories made each day as we individually pull ourselves up out of our foxholes to see our scars heal and to remember what the sun looks like."
Output Explanation
The output is a label from 0 to 5 classifying the severity of mental health issues. A label of 0 indicates minimal severity, suggesting few or no symptoms of mental health problems. Conversely, a label of 5 represents maximal severity, indicating serious mental health conditions that may require immediate and comprehensive intervention. In short, the larger the value, the more serious the situation is likely to be.
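As a rough guide, the six labels can be read as increasing severity bands. The descriptions in the sketch below are illustrative wordings chosen for readability, not official documentation of this model; the model itself emits only an integer class.

```python
# Illustrative severity bands for the 0-5 output labels.
# The exact wordings are assumptions; only the endpoints (0 = minimal,
# 5 = maximal severity) are stated in the model card above.
SEVERITY_BANDS = {
    0: "minimal or no symptoms",
    1: "mild symptoms",
    2: "moderate symptoms",
    3: "moderately severe symptoms",
    4: "severe symptoms",
    5: "maximal severity; may require immediate intervention",
}

def describe(label: int) -> str:
    """Map a predicted class index (0-5) to a human-readable band."""
    return SEVERITY_BANDS[label]

print(describe(0))
print(describe(5))
```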
Code for Testing New Text
Run the following code to classify a new text:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification, AutoConfig

model_path = "KevSun/mentalhealth_LM"

config = AutoConfig.from_pretrained(model_path, num_labels=6, problem_type="single_label_classification")
tokenizer = BertTokenizer.from_pretrained(model_path)
model = BertForSequenceClassification.from_pretrained(model_path, config=config, ignore_mismatched_sizes=True)
model.eval()  # disable dropout for deterministic inference

def predict_text(text, model, tokenizer):
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    logits = outputs.logits
    probabilities = torch.softmax(logits, dim=-1)
    max_probability, predicted_class_index = torch.max(probabilities, dim=-1)
    return predicted_class_index.item(), max_probability.item(), probabilities.numpy()

text = "I was okay just a moment ago. I will learn how to be okay again."
predicted_class, max_prob, probs = predict_text(text, model, tokenizer)
print(f"Predicted class: {predicted_class}, Probability: {max_prob:.4f}")
```
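The probabilities returned by `predict_text` come from a softmax over the six class logits, and the predicted class is simply the argmax. A minimal, dependency-free sketch of that final step, using made-up logits (the real logits come from the model):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the 6 severity classes (not real model output).
logits = [2.1, 0.3, -0.5, -1.2, -1.8, -2.0]
probs = softmax(logits)

# The predicted class is the index with the highest probability.
predicted = max(range(len(probs)), key=probs.__getitem__)

print(predicted)             # index of the most likely class
print(round(sum(probs), 6))  # softmax probabilities sum to 1.0
```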
📄 License
This project is licensed under the Apache 2.0 license.