# bert-base-uncased-emotion
This model is BERT fine-tuned on the emotion dataset, and can be used for text classification tasks such as emotion recognition.
## Quick Start
### Prerequisites
Make sure you have the transformers library installed. If not, you can install it with:

```bash
pip install transformers
```
### Usage Example
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="bhadresh-savani/bert-base-uncased-emotion",
                      return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use")
print(prediction)

"""
output:
[[
  {'label': 'sadness', 'score': 0.0005138228880241513},
  {'label': 'joy', 'score': 0.9972520470619202},
  {'label': 'love', 'score': 0.0007443308713845909},
  {'label': 'anger', 'score': 0.0007404946954920888},
  {'label': 'fear', 'score': 0.00032938539516180754},
  {'label': 'surprise', 'score': 0.0004197491507511586}
]]
"""
```
## Features
- Based on the BERT architecture: BERT is a bidirectional Transformer encoder trained with a masked language modeling (MLM) objective.
- Fine-tuned on the emotion dataset: [bert-base-uncased](https://huggingface.co/bert-base-uncased) is fine-tuned on the emotion dataset using the Hugging Face Trainer with the following training parameters:
  - learning rate: 2e-5
  - batch size: 64
  - num_train_epochs: 8
## Documentation
### Model Performance Comparison on the Emotion Dataset from Twitter
| Model | Accuracy | F1 Score | Test Samples per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97 | 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
### Dataset
The model is trained on the Twitter-Sentiment-Analysis dataset.
### Training procedure
You can refer to the Colab Notebook and change the model name from `distilbert` to `bert` for training; a minimal sketch of the equivalent setup is shown below.
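The sketch uses the Hugging Face Trainer with the hyperparameters listed under Features; the dataset name, splits, and tokenization details are assumptions about the notebook, not the exact training script:

```python
# Illustrative fine-tuning of bert-base-uncased on the emotion dataset.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("emotion")  # train/validation/test splits, 6 labels
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], padding=True, truncation=True)

# batch_size=None tokenizes each split in one pass, padding it to a common length
encoded = dataset.map(tokenize, batched=True, batch_size=None)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=6)

args = TrainingArguments(
    output_dir="bert-base-uncased-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=8,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["validation"])
trainer.train()
```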
### Eval results
```python
{
  'test_accuracy': 0.9405,
  'test_f1': 0.9405920712282673,
  'test_loss': 0.15769127011299133,
  'test_runtime': 10.5179,
  'test_samples_per_second': 190.152,
  'test_steps_per_second': 3.042
}
```
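For reference, here is a minimal sketch of how metrics like these could be reproduced, reusing the `trainer` and `encoded` objects from the training sketch above; the weighted F1 averaging is an assumption, as the card does not state which averaging was used:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Predict on the held-out test split and score the argmax predictions
output = trainer.predict(encoded["test"])
preds = np.argmax(output.predictions, axis=-1)
print({
    "test_accuracy": accuracy_score(output.label_ids, preds),
    "test_f1": f1_score(output.label_ids, preds, average="weighted"),
})
```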
## Reference
- [Natural Language Processing with Transformers by Lewis Tunstall, Leandro von Werra, and Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/)
## License
This model is released under the Apache-2.0 license.