# Cybonto-distilbert-base-uncased-finetuned-ner-Wnut17
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset. It can be used for token classification (NER) tasks and reaches an entity-level F1 of 0.5479 on the evaluation set.
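For inference, the model can be loaded with the Transformers pipeline API. A minimal sketch, assuming the model is published on the Hub under the id below (the repository name is an assumption; adjust it to the actual one):

```python
from transformers import pipeline

# Assumption: Hub repository id for this model; replace with the actual id.
model_id = "Cybonto/distilbert-base-uncased-finetuned-ner-Wnut17"

# "token-classification" matches the task declared in the model index below;
# aggregation_strategy="simple" merges subword pieces into whole entities.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

print(ner("Apple unveiled a new iPhone in Cupertino last week."))
```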
## Documentation
### Model Information
| Property | Details |
|---|---|
| Model Type | Cybonto-distilbert-base-uncased-finetuned-ner-Wnut17 |
| Training Data | wnut_17 |
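For reference, the training data can be inspected with the Hugging Face Datasets library. A minimal sketch using the public wnut_17 dataset (split and feature names follow its standard config):

```python
from datasets import load_dataset

# WNUT-17 emerging-entities dataset; ships with train/validation/test splits.
dataset = load_dataset("wnut_17")

# Each example is a list of tokens paired with integer NER tags (BIO scheme).
example = dataset["train"][0]
print(example["tokens"])
print(example["ner_tags"])

# Map tag ids back to their label names: 'O' plus B-/I- tags
# for six entity types (person, location, corporation, product,
# creative-work, group).
label_names = dataset["train"].features["ner_tags"].feature.names
print(label_names)
```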
### Evaluation Results
This model achieves the following results on the evaluation set (see the metric-computation sketch after this list):
- Loss: 0.5062
- Precision: 0.6603
- Recall: 0.4682
- F1: 0.5479
- Accuracy: 0.9355
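The card does not include the evaluation script itself; the sketch below shows how such entity-level precision/recall/F1 and token accuracy are conventionally computed for token classification with seqeval, loaded via `datasets.load_metric` (consistent with the Datasets 2.1.0 dependency listed under Framework Versions). The helper function name and signature are illustrative, not taken from this repository:

```python
import numpy as np
from datasets import load_metric  # Datasets 2.1.0 API; requires `seqeval` installed

# seqeval reports entity-level precision/recall/F1 and token-level accuracy.
metric = load_metric("seqeval")

def compute_metrics(eval_preds, label_names):
    """Illustrative helper: eval_preds is (logits, labels) from a Trainer run."""
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)

    # Drop special tokens (label id -100) and map ids back to tag strings.
    true_labels = [
        [label_names[l] for l in label_row if l != -100]
        for label_row in labels
    ]
    true_preds = [
        [label_names[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]

    results = metric.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```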
### Model Index
- Name: Cybonto-distilbert-base-uncased-finetuned-ner-Wnut17
  Results:
  - Task:
      Name: Token Classification
      Type: token-classification
    Dataset:
      Name: wnut_17
      Type: wnut_17
      Args: wnut_17
    Metrics:
      - Name: Precision
        Type: precision
        Value: 0.6603139013452914
      - Name: Recall
        Type: recall
        Value: 0.4682034976152623
      - Name: F1
        Type: f1
        Value: 0.547906976744186
      - Name: Accuracy
        Type: accuracy
        Value: 0.9355430668654662
## Technical Details
### Training Hyperparameters
The following hyperparameters were used during training (see the TrainingArguments sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
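A minimal sketch of the corresponding TrainingArguments. The output directory is a placeholder, the Adam settings shown are the Trainer defaults, and per-epoch evaluation is an assumption inferred from the per-epoch validation table below:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",            # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer default.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="epoch",       # assumption: eval once per epoch
)
```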
### Training Results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 107 | 0.3396 | 0.6470 | 0.4269 | 0.5144 | 0.9330 |
| No log | 2.0 | 214 | 0.3475 | 0.5948 | 0.4539 | 0.5149 | 0.9335 |
| No log | 3.0 | 321 | 0.3793 | 0.6613 | 0.4253 | 0.5177 | 0.9332 |
| No log | 4.0 | 428 | 0.3598 | 0.6195 | 0.4944 | 0.5500 | 0.9354 |
| 0.0409 | 5.0 | 535 | 0.3702 | 0.5802 | 0.4571 | 0.5113 | 0.9308 |
| 0.0409 | 6.0 | 642 | 0.4192 | 0.6546 | 0.4459 | 0.5305 | 0.9344 |
| 0.0409 | 7.0 | 749 | 0.4039 | 0.6360 | 0.4610 | 0.5346 | 0.9354 |
| 0.0409 | 8.0 | 856 | 0.4104 | 0.6564 | 0.4587 | 0.5400 | 0.9353 |
| 0.0409 | 9.0 | 963 | 0.3839 | 0.6283 | 0.4944 | 0.5534 | 0.9361 |
| 0.0132 | 10.0 | 1070 | 0.4331 | 0.6197 | 0.4547 | 0.5245 | 0.9339 |
| 0.0132 | 11.0 | 1177 | 0.4152 | 0.6196 | 0.4817 | 0.5420 | 0.9355 |
| 0.0132 | 12.0 | 1284 | 0.4654 | 0.6923 | 0.4507 | 0.5460 | 0.9353 |
| 0.0132 | 13.0 | 1391 | 0.4869 | 0.6739 | 0.4436 | 0.5350 | 0.9350 |
| 0.0132 | 14.0 | 1498 | 0.4297 | 0.6424 | 0.4769 | 0.5474 | 0.9353 |
| 0.0061 | 15.0 | 1605 | 0.4507 | 0.6272 | 0.4626 | 0.5325 | 0.9340 |
| 0.0061 | 16.0 | 1712 | 0.4410 | 0.6066 | 0.4793 | 0.5355 | 0.9335 |
| 0.0061 | 17.0 | 1819 | 0.4851 | 0.6639 | 0.4523 | 0.5381 | 0.9351 |
| 0.0061 | 18.0 | 1926 | 0.4815 | 0.6553 | 0.4563 | 0.5380 | 0.9346 |
| 0.0035 | 19.0 | 2033 | 0.5188 | 0.6780 | 0.4420 | 0.5351 | 0.9350 |
| 0.0035 | 20.0 | 2140 | 0.4986 | 0.6770 | 0.4698 | 0.5547 | 0.9363 |
| 0.0035 | 21.0 | 2247 | 0.4834 | 0.6552 | 0.4714 | 0.5483 | 0.9355 |
| 0.0035 | 22.0 | 2354 | 0.5094 | 0.6784 | 0.4595 | 0.5479 | 0.9358 |
| 0.0035 | 23.0 | 2461 | 0.4954 | 0.6583 | 0.4579 | 0.5401 | 0.9354 |
| 0.0026 | 24.0 | 2568 | 0.5035 | 0.6667 | 0.4595 | 0.5440 | 0.9354 |
| 0.0026 | 25.0 | 2675 | 0.5000 | 0.6599 | 0.4658 | 0.5461 | 0.9355 |
| 0.0026 | 26.0 | 2782 | 0.4968 | 0.6697 | 0.4738 | 0.5549 | 0.9357 |
| 0.0026 | 27.0 | 2889 | 0.4991 | 0.6545 | 0.4714 | 0.5481 | 0.9352 |
| 0.0026 | 28.0 | 2996 | 0.4936 | 0.6508 | 0.4769 | 0.5505 | 0.9353 |
| 0.0021 | 29.0 | 3103 | 0.5005 | 0.6535 | 0.4722 | 0.5482 | 0.9353 |
| 0.0021 | 30.0 | 3210 | 0.5062 | 0.6603 | 0.4682 | 0.5479 | 0.9355 |
### Framework Versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
## License
This model is licensed under the Apache-2.0 license.