# CPU-Compatible Mental Health Chatbot Model
This repository hosts a fine-tuned LLaMA-based model tailored for mental health counseling conversations. It offers empathetic, meaningful responses to mental-health-related questions and runs on CPUs and low-RAM systems.
## Quick Start
This mental health chatbot model is designed to provide support for mental-health-related queries. It's easy to set up and use, even on systems with limited resources.
## Features
- Fine-tuned on Mental Health Counseling Conversations: the model is trained on a dataset specifically curated for mental health support.
- Low Resource Requirements: runs on systems with 15 GB of RAM and a CPU, no GPU needed.
- Built on Meta's LLaMA 3.2 1B model: leverages the LLaMA architecture for high-quality responses.
- Supports LoRA (Low-Rank Adaptation): allows efficient fine-tuning with low computational overhead (see the sketch after this list).
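As an illustration of how a LoRA adapter could be attached for further tuning, here is a minimal sketch using the separate `peft` library (not listed in the installation step below); the rank, alpha, and `target_modules` values are assumptions chosen for LLaMA-style attention layers, not settings taken from this repository:

```python
# Minimal LoRA sketch using the peft library (install separately: pip install peft).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

# Hypothetical adapter settings; tune rank, alpha, and dropout for your own data.
lora_config = LoraConfig(
    r=8,                                  # low-rank dimension of the adapter matrices
    lora_alpha=16,                        # scaling factor applied to the adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections commonly adapted in LLaMA models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # prints how few parameters LoRA actually trains
```

Training the wrapped model with the `Trainer` shown later in this README only updates the small adapter matrices, which is what keeps fine-tuning feasible on modest hardware.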
## Installation
- Clone the repository:

```bash
git clone https://huggingface.co/<your_hf_username>/mental-health-chatbot-model
cd mental-health-chatbot-model
```

- Install the required packages:

```bash
pip install torch transformers datasets huggingface-hub
```
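To confirm the environment is ready, a quick import check (a trivial sketch, nothing repository-specific) should run without errors:

```python
# Sanity check: the required libraries import and report their versions.
import torch
import transformers
import datasets

print(torch.__version__, transformers.__version__, datasets.__version__)
```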
## Usage Examples
### Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned tokenizer and model from the Hugging Face Hub.
model_name = "<your_hf_username>/mental-health-chatbot-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt and generate a response.
input_text = "I feel anxious and don't know what to do."
inputs = tokenizer(input_text, return_tensors="pt")
response = model.generate(**inputs, max_length=256, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(response[0], skip_special_tokens=True))
```
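Because the base checkpoint is an instruct model, the prompt can also be wrapped in its chat format. A minimal sketch, assuming the fine-tuned tokenizer still carries the base model's chat template:

```python
# Build the prompt with the tokenizer's chat template instead of raw text.
messages = [{"role": "user", "content": "I feel anxious and don't know what to do."}]
chat_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(chat_inputs, max_new_tokens=256, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```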
### Advanced Usage
This model can be run on:
- CPU - only systems
- Machines with as little as 15 GB RAM
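To keep peak memory low on such machines, the model can be loaded in a reduced-precision dtype with streamed weight loading. This is a sketch using standard `from_pretrained` arguments rather than a setting shipped with this repository; bfloat16 inference on CPU requires a reasonably recent PyTorch build:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "<your_hf_username>/mental-health-chatbot-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load weights in bfloat16 and stream them in to reduce peak RAM during loading.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,   # roughly halves weight memory versus float32
    low_cpu_mem_usage=True,       # avoids holding a second full copy of the weights while loading
)
```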
## Documentation
### Fine-Tuning Instructions
To further fine-tune the model on your own dataset:
- Prepare your dataset in the Hugging Face Datasets format (a preparation sketch follows the script below).
- Use the following script:
```python
from transformers import Trainer, TrainingArguments

# `model` is the AutoModelForCausalLM loaded above; `train_dataset` and
# `validation_dataset` are tokenized Hugging Face Datasets.
training_args = TrainingArguments(
    output_dir="./fine_tuned_model",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    evaluation_strategy="epoch",   # run evaluation at the end of every epoch
    save_steps=500,
    logging_dir="./logs",
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=validation_dataset,
)

trainer.train()
```
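The script above assumes `train_dataset` and `validation_dataset` already exist. One way they could be built from the counseling dataset is sketched below; the dataset id and the `Context`/`Response` column names are taken from that dataset's public card and should be checked against your copy, and the 512-token cutoff is an arbitrary choice:

```python
from datasets import load_dataset

# LLaMA tokenizers usually ship without a pad token; reuse EOS so padding works.
tokenizer.pad_token = tokenizer.eos_token

# Load the counseling conversations and carve out a small validation split.
raw = load_dataset("Amod/mental_health_counseling_conversations", split="train")
splits = raw.train_test_split(test_size=0.1, seed=42)

def tokenize(example):
    # Concatenate the question and the counselor response into one training sequence.
    text = f"{example['Context']}\n{example['Response']}{tokenizer.eos_token}"
    tokens = tokenizer(text, truncation=True, padding="max_length", max_length=512)
    tokens["labels"] = tokens["input_ids"].copy()  # simplified: padding positions are not masked out
    return tokens

train_dataset = splits["train"].map(tokenize, remove_columns=splits["train"].column_names)
validation_dataset = splits["test"].map(tokenize, remove_columns=splits["test"].column_names)
```

A more careful setup would mask the padding positions in the labels (for example with `DataCollatorForLanguageModeling(tokenizer, mlm=False)`), but the simplified version keeps the sketch short.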
### Model Details

| Property | Details |
|----------|---------|
| Base Model | [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) |
| Training Data | Amod/Mental Health Counseling Conversations |
| Fine-Tuning Framework | Hugging Face Transformers |
### Model Performance

| Property | Details |
|----------|---------|
| Training Epochs | 3 |
| Batch Size | 4 |
| Learning Rate | 5e-5 |
| Evaluation Strategy | Epoch-wise |
## License
This project is licensed under the Apache 2.0 License.
## Acknowledgments
- [Meta](https://huggingface.co/meta-llama) for the LLaMA model
- Hugging Face for their open-source tools and datasets
- The creators of the Mental Health Counseling Conversations dataset