# videomae-base-finetuned-kinetics-0409_final_5sec_org_ab7_val_inside_train
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset. It is a video classification model; its results on the evaluation set are listed below.
## Quick Start
This model has achieved the following results on the evaluation set:
- Loss: 0.3255
- Accuracy: 0.9138
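A minimal inference sketch using the `transformers` VideoMAE classes is shown below. The repository ID is a placeholder for wherever this checkpoint is hosted, and the random clip stands in for frames sampled from a real video; adjust both for actual use.

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, VideoMAEForVideoClassification

# Placeholder repo ID -- substitute the actual Hub path of this checkpoint.
model_id = "videomae-base-finetuned-kinetics-0409_final_5sec_org_ab7_val_inside_train"

processor = AutoImageProcessor.from_pretrained(model_id)
model = VideoMAEForVideoClassification.from_pretrained(model_id)
model.eval()

# Dummy clip: 16 RGB frames at 224x224 (replace with frames sampled from a real 5-second clip).
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)
```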
## Documentation
### Model Information

| Property | Details |
|---|---|
| Library Name | transformers |
| Model Type | videomae-base-finetuned-kinetics-0409_final_5sec_org_ab7_val_inside_train |
| Base Model | MCG-NJU/videomae-base-finetuned-kinetics |
| Tags | generated_from_trainer |
| Metrics | accuracy |
| License | cc-by-nc-4.0 |
### Training Procedure

#### Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 67100
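For reference, these values map roughly onto a `transformers` `TrainingArguments` object as sketched below; `output_dir` and any setting not listed above are illustrative assumptions rather than values taken from this card.

```python
from transformers import TrainingArguments

# Sketch only: reproduces the hyperparameters listed above.
# output_dir is an assumed name; evaluation/logging settings are not specified in this card.
training_args = TrainingArguments(
    output_dir="videomae-base-finetuned-kinetics-0409_final_5sec_org_ab7_val_inside_train",
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    max_steps=67100,
)
```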
#### Training Results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.4404 | 0.0100 | 672 | 0.2641 | 0.8958 |
| 0.0506 | 1.0100 | 1344 | 0.2971 | 0.9045 |
| 0.0077 | 2.0100 | 2016 | 0.8203 | 0.8293 |
| 0.016 | 3.0100 | 2688 | 0.4447 | 0.8958 |
| 0.0012 | 4.0100 | 3360 | 0.5228 | 0.8622 |
| 0.0003 | 5.0100 | 4032 | 0.5333 | 0.8731 |
| 0.0019 | 6.0100 | 4704 | 0.5615 | 0.8786 |
| 0.0669 | 7.0100 | 5376 | 0.3206 | 0.9162 |
| 0.0056 | 8.0100 | 6048 | 0.5627 | 0.8849 |
| 0.0003 | 9.0100 | 6720 | 0.6655 | 0.8567 |
| 0.0063 | 10.0100 | 7392 | 0.6566 | 0.8786 |
| 0.0003 | 11.0100 | 8064 | 0.5058 | 0.8778 |
| 0.0005 | 12.0100 | 8736 | 0.4329 | 0.9045 |
| 0.0005 | 13.0100 | 9408 | 0.4837 | 0.8943 |
| 0.0182 | 14.0100 | 10080 | 0.6702 | 0.8692 |
| 0.0001 | 15.0100 | 10752 | 0.7277 | 0.8583 |
| 0.0001 | 16.0100 | 11424 | 0.6110 | 0.8763 |
| 0.0001 | 17.0100 | 12096 | 0.5027 | 0.9146 |
| 0.0006 | 18.0100 | 12768 | 0.4604 | 0.9068 |
| 0.0138 | 19.0100 | 13440 | 0.4703 | 0.9123 |
| 0.0 | 20.0100 | 14112 | 0.4712 | 0.9068 |
| 0.5385 | 21.0100 | 14784 | 0.5793 | 0.9021 |
| 0.0001 | 22.0100 | 15456 | 0.4995 | 0.9162 |
### Framework Versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
## License
This model is released under the cc-by-nc-4.0 license.