roberta-base-emotion
This model is based on RoBERTa and fine-tuned on the emotion dataset. It performs text classification, in particular six-class emotion classification, with high accuracy.
🚀 Quick Start
To use the roberta-base-emotion model, follow the example below:
from transformers import pipeline

# return_all_scores=True returns a score for every emotion label
# (recent transformers versions prefer the equivalent top_k=None)
classifier = pipeline("text-classification", model="bhadresh-savani/roberta-base-emotion", return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use")
print(prediction)
"""
Output:
[[
{'label': 'sadness', 'score': 0.002281982684507966},
{'label': 'joy', 'score': 0.9726489186286926},
{'label': 'love', 'score': 0.021365027874708176},
{'label': 'anger', 'score': 0.0026395076420158148},
{'label': 'fear', 'score': 0.0007162453257478774},
{'label': 'surprise', 'score': 0.0003483477921690792}
]]
"""
✨ Features
- Based on RoBERTa: RoBERTa is a robustly optimized variant of BERT, pretrained with better hyperparameter choices.
- Fine-tuned on the emotion dataset: the model is fine-tuned using the HuggingFace Trainer with the hyperparameters listed under Model description, enabling it to perform well on emotion classification tasks.
📦 Installation
The model is used through the Hugging Face transformers library; the original README does not list explicit installation steps.
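A typical setup, assuming a standard Python environment with pip and a PyTorch backend (the package names below are the usual ones, not taken from the original card):

pip install transformers torch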
💻 Usage Examples
Basic Usage
The basic usage is identical to the Quick Start example shown above.
Advanced Usage
There is no advanced usage example in the original README. For finer control over tokenization and scoring, the same checkpoint can also be loaded directly, as sketched below.
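A minimal sketch (not from the original card) that loads the model with the generic Auto classes and computes per-label probabilities by hand; the example texts are made up:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "bhadresh-savani/roberta-base-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

texts = ["I love using transformers.", "That noise in the dark scared me."]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)  # one probability per emotion label

for text, p in zip(texts, probs):
    label = model.config.id2label[int(p.argmax())]  # class id -> emotion name
    print(f"{text!r} -> {label} ({p.max():.4f})")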
📚 Documentation
Model description
RoBERTa is BERT with better hyperparameter choices, hence the name: Robustly optimized BERT approach during pretraining. [roberta-base](https://huggingface.co/roberta-base) is fine-tuned on the emotion dataset using the HuggingFace Trainer with the following hyperparameters:
- learning rate: 2e-5
- batch size: 64
- num_train_epochs: 8
Model Performance Comparison on the Emotion Dataset from Twitter
| Model | Accuracy | F1 Score | Test Samples per Second |
|---|---|---|---|
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97 | 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
Dataset
The model is trained on the Twitter-Sentiment-Analysis dataset.
Training procedure
You can follow the Colab Notebook and change the model name to roberta for training; a sketch of the procedure follows below.
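A minimal fine-tuning sketch using the hyperparameters listed above, assuming the emotion dataset is available on the Hub under the name "emotion" and that datasets and transformers are installed; the original Colab notebook remains the authoritative recipe:

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

dataset = load_dataset("emotion")  # splits: train / validation / test, 6 labels
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

encoded = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=6)

args = TrainingArguments(
    output_dir="roberta-base-emotion",
    learning_rate=2e-5,              # from the model description above
    per_device_train_batch_size=64,  # batch size 64
    per_device_eval_batch_size=64,
    num_train_epochs=8,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    data_collator=DataCollatorWithPadding(tokenizer),  # pad per batch
)
trainer.train()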
Eval results
{
'test_accuracy': 0.9395,
'test_f1': 0.9397328860104454,
'test_loss': 0.14367154240608215,
'test_runtime': 10.2229,
'test_samples_per_second': 195.639,
'test_steps_per_second': 3.13
}
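A sketch (not from the original card) of how numbers of this kind can be reproduced on the emotion test split; it assumes the dataset name "emotion" and that the reported F1 is a weighted average:

from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import pipeline

classifier = pipeline("text-classification", model="bhadresh-savani/roberta-base-emotion")
test = load_dataset("emotion", split="test")

predictions = classifier(test["text"], truncation=True)
label_names = test.features["label"].names  # maps class ids to label strings
y_pred = [label_names.index(p["label"]) for p in predictions]
y_true = test["label"]

print("accuracy:", accuracy_score(y_true, y_pred))
print("weighted f1:", f1_score(y_true, y_pred, average="weighted"))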
Reference
- [Natural Language Processing with Transformers by Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/)
🔧 Technical Details
The original README does not provide further implementation details; the model is a roberta-base encoder with a sequence-classification head, and the fine-tuning hyperparameters and evaluation results are listed under Documentation above.
📄 License
The model is licensed under the Apache-2.0 license.