# HelpingAI3
HelpingAI3 is an advanced language model that specializes in emotionally intelligent conversations. It builds on the success of HelpingAI2.5, offering improved emotional understanding and contextual awareness.
## Quick Start

To get started with HelpingAI3, use the example below, which loads the model and generates a response with the `transformers` library:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("HelpingAI/HelpingAI-3")
tokenizer = AutoTokenizer.from_pretrained("HelpingAI/HelpingAI-3")

# Build the conversation as a list of chat messages
chat = [
    {"role": "system", "content": "You are HelpingAI, an emotional AI. Always answer my questions in the HelpingAI style."},
    {"role": "user", "content": "Introduce yourself."}
]

# Apply the model's chat template and move the input ids to the model's device
inputs = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Generate a response with sampling
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens (everything after the prompt)
response = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
⨠Features
- Emotionally Intelligent: HelpingAI3 is designed to understand and respond to human emotions effectively.
- Contextual Awareness: It can maintain context across turns, providing more coherent and relevant responses (see the multi-turn sketch after this list).
- Diverse Applications: Suitable for AI companionship, therapy guidance, personalized learning, and professional assistance.
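
As a minimal illustration of that contextual awareness, the sketch below is a hypothetical continuation of the Quick Start snippet (it reuses its `model`, `tokenizer`, `chat`, and `response` variables; the follow-up question is just an example) and appends each reply to the chat history before generating again:

```python
# Add the previous reply to the history as an assistant turn, then ask a follow-up.
chat.append({"role": "assistant", "content": tokenizer.decode(response, skip_special_tokens=True)})
chat.append({"role": "user", "content": "And how do you handle difficult emotions?"})

# Re-apply the chat template so the full history is part of the new prompt.
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the whole history is re-templated on every turn, earlier messages stay visible to the model without any extra state management.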
## Installation

The upstream README does not list explicit installation steps. The Quick Start only assumes PyTorch and the Hugging Face `transformers` library, so a typical setup is `pip install torch transformers`.
## Usage Examples

### Basic Usage
Basic usage is identical to the Quick Start example above: load the model and tokenizer, apply the chat template, and call `model.generate`.
### Advanced Usage

The upstream README does not include a dedicated advanced usage example.
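One plausible variant is sketched below under stated assumptions: half-precision weights via `torch_dtype`, automatic device placement via `device_map="auto"` (which requires the `accelerate` package), and streamed output with `TextStreamer`. The sampling parameters simply mirror the Quick Start and are not official recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Assumption: fp16 weights and accelerate-based device placement to reduce memory use.
model = AutoModelForCausalLM.from_pretrained(
    "HelpingAI/HelpingAI-3",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("HelpingAI/HelpingAI-3")

chat = [
    {"role": "system", "content": "You are HelpingAI, an emotional AI. Always answer my questions in the HelpingAI style."},
    {"role": "user", "content": "I had a rough day at work. Can you help me unwind?"},
]

inputs = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Print tokens as they are generated instead of waiting for the full reply.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    streamer=streamer,
)
```

Streaming keeps the interaction responsive for longer replies, which suits the conversational use cases listed above.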
## Documentation

### Model Details

| Property | Details |
|----------|---------|
| Developed by | HelpingAI |
| Model Type | Decoder-only large language model |
| Language | English |
| License | HelpingAI License |
### Training Data
HelpingAI3 was trained on a diverse dataset:
- Emotional Dialogues: 15 million rows to enhance conversational intelligence.
- Therapeutic Exchanges: 3 million rows aimed at providing advanced emotional support.
- Cultural Conversations: 250,000 rows to improve global awareness.
- Crisis Response Scenarios: 1 million rows to better handle emergency situations.
### Training Procedure
The model training involved the following steps:
- Base Model: Started from HelpingAI2.5.
- Emotional Intelligence Training: Used Reinforcement Learning for Emotion Understanding (RLEU) and context-aware conversational fine-tuning.
- Optimization: Employed mixed-precision training and advanced token efficiency techniques.
### Intended Use
HelpingAI3 is intended for:
- AI Companionship & Emotional Support: Offering empathetic interactions.
- Therapy & Wellbeing Guidance: Assisting in mental health support.
- Personalized Learning: Tailoring educational content to individual needs.
- Professional AI Assistance: Enhancing productivity in professional settings.
### Limitations
- Biases: The model may reflect biases in the training data.
- Understanding Complex Emotions: There may be difficulties in interpreting nuanced human emotions.
- Not a Substitute for Professional Help: For serious emotional or psychological issues, consult a professional.
## Technical Details

HelpingAI3 was developed by HelpingAI as a decoder-only large language model. Training started from HelpingAI2.5, combined Reinforcement Learning for Emotion Understanding (RLEU) with context-aware conversational fine-tuning, and used mixed-precision training along with advanced token efficiency techniques.
## License
The model is released under the HelpingAI License.