# 🚀 clip-finetuned-csu-p14-336-e3l57-l
This model is a fine-tuned version of [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on an unknown dataset, achieving a loss of 0.4700 on the evaluation set.
## 🚀 Quick Start
This fine-tuned model, `clip-finetuned-csu-p14-336-e3l57-l`, is based on [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336). It was trained on an unknown dataset and reaches a loss of 0.4700 on the evaluation set.
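Like the base checkpoint, the model can be used for zero-shot image–text matching with the standard `transformers` CLIP classes. A minimal sketch follows; the base checkpoint id is used as a stand-in, so swap in the repo id or local path of this fine-tuned model when loading.

```python
# Minimal CLIP zero-shot image–text matching sketch.
# The base checkpoint id below is a placeholder; replace it with the
# repo id or local path of the fine-tuned model.
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

checkpoint = "openai/clip-vit-large-patch14-336"  # placeholder for the fine-tuned checkpoint
model = CLIPModel.from_pretrained(checkpoint)
processor = CLIPProcessor.from_pretrained(checkpoint)

# A sample image from the COCO validation set.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over the image–text similarity logits gives per-caption probabilities.
probs = outputs.logits_per_image.softmax(dim=1)  # shape: (1, 2)
print(probs)
```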
## 📚 Documentation
### Model description
This model is a fine-tuned version of [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on an unknown dataset.
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure

#### Training hyperparameters
The following hyperparameters were used during training:
| Property | Details |
|:---|:---|
| learning_rate | 5e-07 |
| train_batch_size | 128 |
| eval_batch_size | 8 |
| seed | 42 |
| optimizer | Adam with betas=(0.9, 0.999) and epsilon=1e-08 |
| lr_scheduler_type | linear |
| num_epochs | 3.0 |
#### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 0.3812 | 0.0533 | 500 | 1.1163 |
| 0.2683 | 0.1067 | 1000 | 0.9684 |
| 0.2119 | 0.1600 | 1500 | 0.9100 |
| 0.1889 | 0.2133 | 2000 | 0.8620 |
| 0.2071 | 0.2666 | 2500 | 0.7918 |
| 0.1588 | 0.3200 | 3000 | 0.7657 |
| 0.1718 | 0.3733 | 3500 | 0.7610 |
| 0.1113 | 0.4266 | 4000 | 0.7458 |
| 0.1313 | 0.4799 | 4500 | 0.7168 |
| 0.1649 | 0.5333 | 5000 | 0.7019 |
| 0.1245 | 0.5866 | 5500 | 0.6812 |
| 0.1286 | 0.6399 | 6000 | 0.6502 |
| 0.1076 | 0.6933 | 6500 | 0.6154 |
| 0.1477 | 0.7466 | 7000 | 0.6118 |
| 0.1315 | 0.7999 | 7500 | 0.6016 |
| 0.1413 | 0.8532 | 8000 | 0.5849 |
| 0.124 | 0.9066 | 8500 | 0.5766 |
| 0.1215 | 0.9599 | 9000 | 0.5559 |
| 0.131 | 1.0132 | 9500 | 0.5633 |
| 0.0348 | 1.0666 | 10000 | 0.5531 |
| 0.0687 | 1.1199 | 10500 | 0.5495 |
| 0.0749 | 1.1732 | 11000 | 0.5474 |
| 0.1083 | 1.2265 | 11500 | 0.5416 |
| 0.0485 | 1.2799 | 12000 | 0.5412 |
| 0.0449 | 1.3332 | 12500 | 0.5511 |
| 0.0847 | 1.3865 | 13000 | 0.5492 |
| 0.0702 | 1.4398 | 13500 | 0.5509 |
| 0.0487 | 1.4932 | 14000 | 0.5447 |
| 0.072 | 1.5465 | 14500 | 0.5490 |
| 0.0325 | 1.5998 | 15000 | 0.5443 |
| 0.0894 | 1.6532 | 15500 | 0.5264 |
| 0.0503 | 1.7065 | 16000 | 0.5209 |
| 0.0568 | 1.7598 | 16500 | 0.5083 |
| 0.0589 | 1.8131 | 17000 | 0.5093 |
| 0.0892 | 1.8665 | 17500 | 0.4983 |
| 0.0584 | 1.9198 | 18000 | 0.4886 |
| 0.063 | 1.9731 | 18500 | 0.4945 |
| 0.0493 | 2.0265 | 19000 | 0.4956 |
| 0.0246 | 2.0798 | 19500 | 0.4871 |
| 0.0385 | 2.1331 | 20000 | 0.4830 |
| 0.0574 | 2.1864 | 20500 | 0.4849 |
| 0.039 | 2.2398 | 21000 | 0.4872 |
| 0.0653 | 2.2931 | 21500 | 0.4838 |
| 0.0325 | 2.3464 | 22000 | 0.4876 |
| 0.0578 | 2.3997 | 22500 | 0.4870 |
| 0.039 | 2.4531 | 23000 | 0.4805 |
| 0.0536 | 2.5064 | 23500 | 0.4824 |
| 0.0382 | 2.5597 | 24000 | 0.4809 |
| 0.0479 | 2.6131 | 24500 | 0.4749 |
| 0.0268 | 2.6664 | 25000 | 0.4723 |
| 0.0406 | 2.7197 | 25500 | 0.4743 |
| 0.0349 | 2.7730 | 26000 | 0.4718 |
| 0.017 | 2.8264 | 26500 | 0.4721 |
| 0.0286 | 2.8797 | 27000 | 0.4709 |
| 0.0265 | 2.9330 | 27500 | 0.4708 |
| 0.0552 | 2.9863 | 28000 | 0.4700 |
### Framework versions
| Property | Details |
|:---|:---|
| Transformers | 4.45.0.dev0 |
| Pytorch | 1.12.1 |
| Datasets | 2.21.0 |
| Tokenizers | 0.19.1 |