🚀 roberta-base-legal-multi-downstream-indian-ner
This model is a fine-tuned version of [MHGanainy/roberta-base-legal-multi](https://huggingface.co/MHGanainy/roberta-base-legal-multi) for Indian legal named-entity recognition, reaching an F1 of 0.7210 and an accuracy of 0.9663 on the evaluation set.
🚀 Quick Start
This model is a fine-tuned version of [MHGanainy/roberta-base-legal-multi](https://huggingface.co/MHGanainy/roberta-base-legal-multi) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2526
- Precision: 0.6406
- Recall: 0.8244
- F1: 0.7210
- Accuracy: 0.9663
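
A minimal inference sketch follows; it assumes the checkpoint is published under the repository name in the title and exposes a standard token-classification head. The example sentence and printed fields are illustrative only.

```python
# Minimal sketch: load the fine-tuned checkpoint with the token-classification
# pipeline. The repository name below is assumed from the model card title.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="MHGanainy/roberta-base-legal-multi-downstream-indian-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

text = "The appeal was heard by the Supreme Court of India on 12 March 2019."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```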
📚 Documentation
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
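
As a rough guide, the sketch below shows how these hyperparameters could map onto `transformers.TrainingArguments`. The `output_dir` name is a placeholder, and the dataset, tokenization, and label set are omitted because they are not documented in this card.

```python
# Hedged configuration sketch mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-legal-multi-downstream-indian-ner",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
    fp16=True,  # Native AMP mixed-precision training
)
```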
Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 172 | 0.2310 | 0.1233 | 0.4904 | 0.1971 | 0.8307 |
| No log | 2.0 | 344 | 0.1929 | 0.1983 | 0.5393 | 0.2900 | 0.8765 |
| 0.4324 | 3.0 | 516 | 0.1667 | 0.1773 | 0.4897 | 0.2604 | 0.8738 |
| 0.4324 | 4.0 | 688 | 0.1836 | 0.2957 | 0.6059 | 0.3975 | 0.9081 |
| 0.4324 | 5.0 | 860 | 0.2005 | 0.2855 | 0.5623 | 0.3787 | 0.9137 |
| 0.1106 | 6.0 | 1032 | 0.2003 | 0.3858 | 0.6974 | 0.4968 | 0.9323 |
| 0.1106 | 7.0 | 1204 | 0.2224 | 0.4182 | 0.6719 | 0.5155 | 0.9428 |
| 0.1106 | 8.0 | 1376 | 0.2221 | 0.3347 | 0.6147 | 0.4334 | 0.9312 |
| 0.0589 | 9.0 | 1548 | 0.1960 | 0.4067 | 0.7026 | 0.5152 | 0.9404 |
| 0.0589 | 10.0 | 1720 | 0.1904 | 0.5049 | 0.7410 | 0.6006 | 0.9524 |
| 0.0589 | 11.0 | 1892 | 0.2274 | 0.5337 | 0.7707 | 0.6307 | 0.9565 |
| 0.0359 | 12.0 | 2064 | 0.2471 | 0.5525 | 0.7696 | 0.6432 | 0.9575 |
| 0.0359 | 13.0 | 2236 | 0.2352 | 0.5649 | 0.7675 | 0.6508 | 0.9591 |
| 0.0359 | 14.0 | 2408 | 0.2297 | 0.5530 | 0.7661 | 0.6424 | 0.9586 |
| 0.0224 | 15.0 | 2580 | 0.2349 | 0.5702 | 0.7923 | 0.6632 | 0.9597 |
| 0.0224 | 16.0 | 2752 | 0.2465 | 0.6033 | 0.8052 | 0.6898 | 0.9624 |
| 0.0224 | 17.0 | 2924 | 0.2428 | 0.6100 | 0.8098 | 0.6959 | 0.9647 |
| 0.0143 | 18.0 | 3096 | 0.2543 | 0.6238 | 0.8154 | 0.7068 | 0.9646 |
| 0.0143 | 19.0 | 3268 | 0.2526 | 0.6305 | 0.8161 | 0.7114 | 0.9651 |
| 0.0143 | 20.0 | 3440 | 0.2526 | 0.6406 | 0.8244 | 0.7210 | 0.9663 |
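
The Precision, Recall, and F1 columns are entity-level scores of the kind produced for token-classification fine-tuning by the seqeval library; whether that exact tool was used here is an assumption. A small sketch of the computation, with made-up BIO tag sequences, is:

```python
# Entity-level precision/recall/F1 as computed by seqeval (assumed metric);
# the label names and tag sequences below are illustrative, not from this model.
from seqeval.metrics import precision_score, recall_score, f1_score

y_true = [["B-COURT", "I-COURT", "O", "B-DATE"]]
y_pred = [["B-COURT", "I-COURT", "O", "O"]]

print(precision_score(y_true, y_pred))  # fraction of predicted entities that are correct
print(recall_score(y_true, y_pred))     # fraction of gold entities that are recovered
print(f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```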
Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1