# Fast Emotion-X: Emotion Detection with a Fine-Tuned DeBERTa V3 Small Model
Fast Emotion-X is a state-of-the-art emotion detection model fine-tuned from Microsoft's DeBERTa V3 Small model, designed to accurately classify text into one of six emotional categories.
## Quick Start
Leveraging the robust capabilities of DeBERTa, the model is fine-tuned on a comprehensive emotion dataset, ensuring high accuracy and reliability.
## Features
- Accurate Classification: Classifies text into six emotional categories: anger, disgust, fear, joy, sadness, and surprise.
- Multiple Usage Modes: Single-text classification, batch processing, visualization, CLI usage, DataFrame integration, emotion trend analysis, and fine-tuning on custom datasets.
## Installation
Install the package using pip:

```shell
pip install emotionclassifier
```
## Usage Examples
### Basic Usage

Here's an example of how to use `emotionclassifier` to classify a single text:

```python
from emotionclassifier import EmotionClassifier

classifier = EmotionClassifier()
text = "I am very happy today!"
result = classifier.predict(text)
print("Emotion:", result['label'])
print("Confidence:", result['confidence'])
```
### Batch Processing

You can classify multiple texts at once using the `predict_batch` method:

```python
texts = ["I am very happy today!", "I am so sad."]
results = classifier.predict_batch(texts)
print("Batch processing results:", results)
```
### Visualization

To visualize the emotion distribution of a text:

```python
from emotionclassifier import plot_emotion_distribution

result = classifier.predict("I am very happy today!")
plot_emotion_distribution(result['probabilities'], classifier.labels.values())
```
### Command-Line Interface (CLI) Usage

You can also use the package from the command line:

```shell
emotionclassifier --model deberta-v3-small --text "I am very happy today!"
```
### DataFrame Integration

Integrate with pandas DataFrames to classify text columns:

```python
import pandas as pd
from emotionclassifier import DataFrameEmotionClassifier

df = pd.DataFrame({
    'text': ["I am very happy today!", "I am so sad."]
})
classifier = DataFrameEmotionClassifier()
df = classifier.classify_dataframe(df, 'text')
print(df)
```
### Emotion Trends Over Time

Analyze and plot emotion trends over time:

```python
from emotionclassifier import EmotionTrends

texts = ["I am very happy today!", "I am feeling okay.", "I am very sad."]
trends = EmotionTrends()
emotions = trends.analyze_trends(texts)
trends.plot_trends(emotions)
```
### Fine-Tuning

Fine-tune a pre-trained model on your own dataset:

```python
from emotionclassifier.fine_tune import fine_tune_model

train_dataset = ...
val_dataset = ...
fine_tune_model(classifier.model, classifier.tokenizer, train_dataset, val_dataset, output_dir='fine_tuned_model')
```
### Using the transformers Library

You can also load the model directly with the Hugging Face `transformers` library. Note that the highest-scoring class index should be mapped back to a label name via the model config:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "AnkitAI/deberta-v3-small-base-emotions-classifier"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def predict_emotion(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)
    with torch.no_grad():
        outputs = model(**inputs)
    # Map the highest-scoring class index to its human-readable label
    predicted_class = outputs.logits.argmax(dim=1).item()
    return model.config.id2label[predicted_class]

text = "I'm so happy with the results!"
emotion = predict_emotion(text)
print("Detected Emotion:", emotion)
```
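The helper above returns only the top label. To also report a confidence score, as the package's `predict` method does, the logits can be normalized into probabilities with a softmax. A minimal pure-Python sketch of that step (the `softmax` function and the example logits here are illustrative, not part of the package):

```python
import math

def softmax(logits):
    """Convert raw logits into probabilities that sum to 1."""
    # Subtract the max logit for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Example logits for six emotion classes
logits = [0.5, -1.2, 0.1, 3.4, -0.7, 0.2]
probs = softmax(logits)
confidence = max(probs)
predicted_index = probs.index(confidence)
print(predicted_index, round(confidence, 3))
```

In practice this is the same computation as `torch.softmax(outputs.logits, dim=1)`; the confidence is simply the probability of the argmax class.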
## Documentation
### Model Details

- Model Name: AnkitAI/deberta-v3-small-base-emotions-classifier
- Base Model: microsoft/deberta-v3-small
- Dataset: dair-ai/emotion
- Fine-tuning: The model is fine-tuned for emotion detection with a classification head for six emotional categories: anger, disgust, fear, joy, sadness, and surprise.
### Emotion Labels
- Anger
- Disgust
- Fear
- Joy
- Sadness
- Surprise
### Training

The model was trained with the following parameters:

- Learning Rate: 2e-5
- Batch Size: 4
- Weight Decay: 0.01
- Evaluation Strategy: epoch
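These hyperparameters correspond to standard Hugging Face `TrainingArguments` fields. A sketch of that mapping as a plain dictionary of keyword arguments (the argument names follow the `transformers` API and the output path is an assumption; neither is confirmed by this card):

```python
# Hypothetical mapping of the training parameters above onto
# transformers.TrainingArguments keyword arguments.
training_kwargs = {
    "output_dir": "fine_tuned_model",    # assumed output path
    "learning_rate": 2e-5,
    "per_device_train_batch_size": 4,
    "weight_decay": 0.01,
    "eval_strategy": "epoch",            # evaluate at the end of each epoch
    "num_train_epochs": 20,              # from the model card data below
}
print(training_kwargs)
```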
### Training Details
- Evaluation Loss: 0.0858
- Evaluation Runtime: 110070.6349 seconds
- Evaluation Samples/Second: 78.495
- Evaluation Steps/Second: 2.453
- Training Loss: 0.1049
- Evaluation Accuracy: 94.6%
- Evaluation Precision: 94.8%
- Evaluation Recall: 94.5%
- Evaluation F1 Score: 94.7%
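The F1 score is the harmonic mean of precision and recall. As a quick sanity check on the figures above (a sketch; feeding in the rounded percentages from this card reproduces the reported F1 to within rounding):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Evaluation precision (94.8%) and recall (94.5%) reported above
print(round(f1_score(0.948, 0.945), 4))
```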
### Model Card Data

| Parameter | Value |
|---|---|
| Model Name | microsoft/deberta-v3-small |
| Training Dataset | dair-ai/emotion |
| Number of Training Epochs | 20 |
| Learning Rate | 2e-5 |
| Per Device Train Batch Size | 4 |
| Evaluation Strategy | epoch |
| Best Model Accuracy | 94.6% |
## License
This model is licensed under the MIT License.