# twitter-roberta-base-WNUT
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the wnut_17 dataset. It performs token classification (named-entity recognition over the WNUT 17 emerging-entities data), reaching an F1 of 0.6654 and an accuracy of 0.9640 on the evaluation set.
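A minimal inference sketch using the Transformers `pipeline` API is shown below; the repository id is a placeholder, since this card does not state where the fine-tuned checkpoint is hosted.

```python
from transformers import pipeline

# Placeholder repository id -- substitute the actual location of this fine-tuned checkpoint.
model_id = "your-username/twitter-roberta-base-WNUT"

# Token-classification pipeline; aggregation_strategy="simple" merges sub-word pieces into entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

print(ner("Excited to visit the new Apple store in London this weekend!"))
```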
## Metadata
| Property | Details |
| --- | --- |
| Tags | generated_from_trainer |
| Datasets | wnut_17 |
| Metrics | precision, recall, f1, accuracy |
## Model Index
- Name: twitter-roberta-base-WNUT
- Results:
  - Task:
    - Name: Token Classification
    - Type: token-classification
  - Dataset:
    - Name: wnut_17
    - Type: wnut_17
    - Args: wnut_17
  - Metrics:
    - Name: Precision, Type: precision, Value: 0.7045454545454546
    - Name: Recall, Type: recall, Value: 0.6303827751196173
    - Name: F1, Type: f1, Value: 0.6654040404040403
    - Name: Accuracy, Type: accuracy, Value: 0.9639611008707811
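For context on the dataset behind these figures, the WNUT 17 label set can be inspected directly with the `datasets` library; the sketch below is illustrative and assumes `datasets` is installed (as listed under Framework Versions).

```python
from datasets import load_dataset

# Load the WNUT 17 (emerging/rare entities) dataset used for fine-tuning and evaluation.
wnut = load_dataset("wnut_17")

# Each token carries an integer NER tag; the feature metadata maps ids back to label strings.
label_names = wnut["train"].features["ner_tags"].feature.names
print(label_names)

# Peek at one example with its tokens aligned to human-readable tags.
example = wnut["train"][0]
print(list(zip(example["tokens"], [label_names[i] for i in example["ner_tags"]])))
```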
## Evaluation Results
This model achieves the following results on the evaluation set (a sketch of how such metrics are typically computed follows the list):
- Loss: 0.1938
- Precision: 0.7045
- Recall: 0.6304
- F1: 0.6654
- Accuracy: 0.9640
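The precision, recall, and F1 values above are entity-level metrics of the kind usually computed with `seqeval`; the snippet below is only a hedged illustration of that call pattern on toy label sequences, not the exact evaluation script behind this card.

```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy gold and predicted tag sequences (one inner list per sentence), just to show the call pattern.
y_true = [["O", "B-person", "I-person", "O", "B-location"]]
y_pred = [["O", "B-person", "O", "O", "B-location"]]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
print("accuracy: ", accuracy_score(y_true, y_pred))
```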
## Training Procedure
### Training Hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 1024
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
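A `TrainingArguments` object mirroring these values might look like the following sketch; the output directory and evaluation cadence are assumptions, since the original training script is not part of this card.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir and the eval cadence are assumptions.
training_args = TrainingArguments(
    output_dir="twitter-roberta-base-WNUT",  # hypothetical output path
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=1024,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="steps",  # assumption: matches the 25-step evaluation cadence in the table below
    eval_steps=25,
)
```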
### Training Results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.46 | 25 | 0.3912 | 0.0 | 0.0 | 0.0 | 0.9205 |
| No log | 0.93 | 50 | 0.2847 | 0.25 | 0.0024 | 0.0047 | 0.9209 |
| No log | 1.39 | 75 | 0.2449 | 0.5451 | 0.3469 | 0.4240 | 0.9426 |
| No log | 1.85 | 100 | 0.1946 | 0.6517 | 0.4856 | 0.5565 | 0.9492 |
| No log | 2.31 | 125 | 0.1851 | 0.6921 | 0.5646 | 0.6219 | 0.9581 |
| No log | 2.78 | 150 | 0.1672 | 0.6867 | 0.5873 | 0.6331 | 0.9594 |
| No log | 3.24 | 175 | 0.1675 | 0.6787 | 0.5837 | 0.6277 | 0.9615 |
| No log | 3.7 | 200 | 0.1644 | 0.6765 | 0.6328 | 0.6539 | 0.9638 |
| No log | 4.17 | 225 | 0.1672 | 0.6997 | 0.6495 | 0.6737 | 0.9640 |
| No log | 4.63 | 250 | 0.1652 | 0.6915 | 0.6435 | 0.6667 | 0.9649 |
| No log | 5.09 | 275 | 0.1882 | 0.7067 | 0.6053 | 0.6521 | 0.9629 |
| No log | 5.56 | 300 | 0.1783 | 0.7128 | 0.6352 | 0.6717 | 0.9645 |
| No log | 6.02 | 325 | 0.1813 | 0.7011 | 0.6172 | 0.6565 | 0.9639 |
| No log | 6.48 | 350 | 0.1804 | 0.7139 | 0.6447 | 0.6776 | 0.9647 |
| No log | 6.94 | 375 | 0.1902 | 0.7218 | 0.6268 | 0.6709 | 0.9641 |
| No log | 7.41 | 400 | 0.1883 | 0.7106 | 0.6316 | 0.6688 | 0.9641 |
| No log | 7.87 | 425 | 0.1862 | 0.7067 | 0.6340 | 0.6683 | 0.9643 |
| No log | 8.33 | 450 | 0.1882 | 0.7053 | 0.6328 | 0.6671 | 0.9639 |
| No log | 8.8 | 475 | 0.1919 | 0.7055 | 0.6304 | 0.6658 | 0.9638 |
| 0.1175 | 9.26 | 500 | 0.1938 | 0.7045 | 0.6304 | 0.6654 | 0.9640 |
| 0.1175 | 9.72 | 525 | 0.1880 | 0.7025 | 0.6411 | 0.6704 | 0.9646 |
## Framework Versions
- Transformers 4.20.1
- PyTorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1