# wav2vec_cv
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset, achieving a loss of 4.1760 and a WER of 1.0 on the evaluation set. Note that a WER of 1.0 corresponds to a 100% word error rate, so this checkpoint does not yet produce usable transcriptions.
## Quick Start
The model achieves the following results on the evaluation set:
- Loss: 4.1760
- Wer: 1.0
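
Below is a minimal inference sketch using the standard Transformers wav2vec2 API. The repo id `your-username/wav2vec_cv` and the file `sample.wav` are placeholders; substitute the actual Hub id (or a local checkpoint path) and your own audio file.

```python
# Minimal inference sketch. "your-username/wav2vec_cv" is a placeholder
# repo id, not the actual Hub location of this model.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "your-username/wav2vec_cv"  # placeholder
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load an audio file and resample to the 16 kHz rate wav2vec2 expects.
waveform, sample_rate = torchaudio.load("sample.wav")  # placeholder file
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```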
## Documentation

### Model description

More information needed

### Intended uses & limitations

More information needed

### Training and evaluation data

More information needed
## Technical Details

### Training procedure

#### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.003
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 60
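
As a point of reference, here is a sketch of how these values map onto `transformers.TrainingArguments`. This assumes the standard `Trainer` API was used, which the card does not state explicitly, and the `output_dir` is a placeholder.

```python
# Hypothetical reconstruction of the training configuration; assumes the
# standard transformers Trainer was used (not confirmed by the card).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec_cv",        # placeholder output directory
    learning_rate=3e-3,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 6 * 2 = 12
    lr_scheduler_type="linear",
    warmup_steps=20,
    num_train_epochs=60,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default,
    # so no explicit optimizer arguments are needed here.
)
```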
#### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 7.1467        | 4.29  | 30   | 4.2173          | 1.0 |
| 6.8918        | 8.57  | 60   | 4.2004          | 1.0 |
| 5.4913        | 12.86 | 90   | 4.2007          | 1.0 |
| 5.3906        | 17.14 | 120  | 4.1765          | 1.0 |
| 4.9212        | 21.43 | 150  | 4.1714          | 1.0 |
| 4.3916        | 25.71 | 180  | 4.1811          | 1.0 |
| 5.2255        | 30.0  | 210  | 4.1633          | 1.0 |
| 4.501         | 34.29 | 240  | 4.2050          | 1.0 |
| 4.4328        | 38.57 | 270  | 4.1572          | 1.0 |
| 4.2136        | 42.86 | 300  | 4.1698          | 1.0 |
| 4.3353        | 47.14 | 330  | 4.1721          | 1.0 |
| 4.1805        | 51.43 | 360  | 4.1804          | 1.0 |
| 4.1695        | 55.71 | 390  | 4.1801          | 1.0 |
| 4.2978        | 60.0  | 420  | 4.1760          | 1.0 |
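
For reference, the Wer column is the word error rate on the validation set. A minimal sketch of computing it with the Datasets release pinned under Framework versions below (1.18.3) is shown here; the predictions and references are hypothetical examples, and the `jiwer` package is required.

```python
# Minimal WER computation sketch using datasets.load_metric, the metric API
# available in the pinned Datasets 1.18.3 release (requires jiwer).
from datasets import load_metric

wer_metric = load_metric("wer")
predictions = ["hello world"]        # hypothetical model transcriptions
references = ["hello there world"]   # hypothetical ground-truth transcripts
# WER = (substitutions + insertions + deletions) / reference word count
print(wer_metric.compute(predictions=predictions, references=references))
```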
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
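
To recreate this environment, the pinned versions can be installed as follows; the CUDA 11.3 PyTorch wheel index is assumed from the `+cu113` build tag.

```bash
pip install transformers==4.17.0 datasets==1.18.3 tokenizers==0.12.1
pip install torch==1.11.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
```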
## License

This project is licensed under the Apache-2.0 license.