# wac2vec-lllfantomlll
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset for automatic speech recognition.
It achieves the following results on the evaluation set (final checkpoint; see the training results table below):
- Loss: 0.5560
- Wer: 0.3417

## Quick Start
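A minimal inference sketch, assuming the model is published on the Hugging Face Hub (the repository id below is a placeholder, not confirmed by this card) and that the input audio is 16 kHz mono, the rate wav2vec2-base was pretrained on:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder repository id; substitute the actual Hub id of this model.
model_id = "<namespace>/wac2vec-lllfantomlll"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# wav2vec2-base expects 16 kHz mono audio.
speech, sr = librosa.load("sample.wav", sr=16000)
inputs = processor(speech, sampling_rate=sr, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```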
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
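A minimal `TrainingArguments` sketch consistent with the hyperparameters above; `output_dir` is an illustrative assumption, not taken from this card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wac2vec-lllfantomlll",  # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Transformers
    # defaults, spelled out here to match the list above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```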
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5768        | 1.0   | 500   | 2.0283          | 1.0238 |
| 0.9219        | 2.01  | 1000  | 0.5103          | 0.5022 |
| 0.4497        | 3.01  | 1500  | 0.4746          | 0.4669 |
| 0.3163        | 4.02  | 2000  | 0.4144          | 0.4229 |
| 0.2374        | 5.02  | 2500  | 0.4186          | 0.4161 |
| 0.2033        | 6.02  | 3000  | 0.4115          | 0.3975 |
| 0.1603        | 7.03  | 3500  | 0.4424          | 0.3817 |
| 0.1455        | 8.03  | 4000  | 0.4151          | 0.3918 |
| 0.1276        | 9.04  | 4500  | 0.4940          | 0.3798 |
| 0.108         | 10.04 | 5000  | 0.4580          | 0.3688 |
| 0.1053        | 11.04 | 5500  | 0.4243          | 0.3700 |
| 0.0929        | 12.05 | 6000  | 0.4999          | 0.3727 |
| 0.0896        | 13.05 | 6500  | 0.4991          | 0.3624 |
| 0.0748        | 14.06 | 7000  | 0.4924          | 0.3602 |
| 0.0681        | 15.06 | 7500  | 0.4908          | 0.3544 |
| 0.0619        | 16.06 | 8000  | 0.5021          | 0.3559 |
| 0.0569        | 17.07 | 8500  | 0.5448          | 0.3518 |
| 0.0549        | 18.07 | 9000  | 0.4919          | 0.3508 |
| 0.0478        | 19.08 | 9500  | 0.4704          | 0.3513 |
| 0.0437        | 20.08 | 10000 | 0.5058          | 0.3555 |
| 0.0421        | 21.08 | 10500 | 0.5127          | 0.3489 |
| 0.0362        | 22.09 | 11000 | 0.5439          | 0.3527 |
| 0.0322        | 23.09 | 11500 | 0.5418          | 0.3469 |
| 0.0327        | 24.1  | 12000 | 0.5298          | 0.3422 |
| 0.0292        | 25.1  | 12500 | 0.5511          | 0.3426 |
| 0.0246        | 26.1  | 13000 | 0.5349          | 0.3472 |
| 0.0251        | 27.11 | 13500 | 0.5646          | 0.3391 |
| 0.0214        | 28.11 | 14000 | 0.5821          | 0.3424 |
| 0.0217        | 29.12 | 14500 | 0.5560          | 0.3417 |
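The Wer column above is the word error rate on the validation set. A small sketch of how such a score can be computed with the `wer` metric from Datasets 1.18.3 (it requires the `jiwer` package); the strings are illustrative:

```python
from datasets import load_metric

wer_metric = load_metric("wer")  # needs: pip install jiwer

# One of the two reference words is wrong, so WER = 1/2.
wer = wer_metric.compute(
    predictions=["hello world"],
    references=["hello word"],
)
print(wer)  # 0.5
```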
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
## License
This model is licensed under the Apache 2.0 license.