# vit_Liveness_detection_v1.0
This is a fine-tuned model based on the Vision Transformer (ViT) architecture, designed for liveness detection tasks.

## Quick Start
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0047
- Accuracy: 0.9988
- F1: 0.9988
- Recall: 0.9988
- Precision: 0.9988
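For inference, the model can be used with the `transformers` image-classification pipeline. A minimal sketch; the `model_id` below is a placeholder (the actual Hub repo id for this checkpoint is not stated in this card), and the `transformers` import is done lazily inside the function so the rest of the sketch stays dependency-free:

```python
def classify_liveness(image_path, model_id="vit_Liveness_detection_v1.0"):
    """Run liveness classification on a single image.

    ``model_id`` is a placeholder -- substitute the real Hub repo id
    (or a local checkpoint directory) for this model.
    """
    from transformers import pipeline  # lazy import: only needed at call time

    clf = pipeline("image-classification", model=model_id)
    # Returns a list of {"label": ..., "score": ...} dicts, best score first.
    return clf(image_path)


# ViT-Base/16 at 224x224 resolution splits each image into a 14x14 grid of
# 16-pixel patches, i.e. 196 patch tokens (plus one [CLS] token) per image.
def num_patch_tokens(image_size=224, patch_size=16):
    return (image_size // patch_size) ** 2
```

Usage: `classify_liveness("face.jpg")` downloads the checkpoint on first call and returns the per-label scores.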
## Documentation
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
## Technical Details
### Training procedure
#### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
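The log below implies about 625 optimizer steps per epoch (step 128 corresponds to epoch 0.2048), so 5 epochs is 3,125 steps. With the linear scheduler and no warmup (the `Trainer` default, assumed here), the learning rate decays from 5e-05 to zero over those steps. A minimal sketch of that schedule:

```python
BASE_LR = 5e-05
STEPS_PER_EPOCH = 625              # implied by the log: step 128 <-> epoch 0.2048
TOTAL_STEPS = 5 * STEPS_PER_EPOCH  # num_epochs = 5 -> 3125 steps

def linear_lr(step, base_lr=BASE_LR, total_steps=TOTAL_STEPS):
    """Linear decay from base_lr to 0, assuming zero warmup steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Optimizer settings as listed above.
adam_config = {"betas": (0.9, 0.999), "eps": 1e-08}
```

So halfway through training (step ~1562) the learning rate is roughly half of 5e-05, reaching zero at the final step.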
#### Training results
| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     | Recall | Precision |
|---------------|--------|------|-----------------|----------|--------|--------|-----------|
| 0.0254        | 0.2048 | 128  | 0.0148          | 0.9946   | 0.9946 | 0.9946 | 0.9946    |
| 0.0256        | 0.4096 | 256  | 0.0180          | 0.9945   | 0.9944 | 0.9945 | 0.9945    |
| 0.0113        | 0.6144 | 384  | 0.0133          | 0.9955   | 0.9955 | 0.9955 | 0.9955    |
| 0.0116        | 0.8192 | 512  | 0.0070          | 0.9976   | 0.9976 | 0.9976 | 0.9976    |
| 0.0084        | 1.024  | 640  | 0.0072          | 0.9976   | 0.9976 | 0.9976 | 0.9976    |
| 0.0048        | 1.2288 | 768  | 0.0084          | 0.9976   | 0.9976 | 0.9976 | 0.9976    |
| 0.0041        | 1.4336 | 896  | 0.0078          | 0.9975   | 0.9975 | 0.9975 | 0.9975    |
| 0.0015        | 1.6384 | 1024 | 0.0049          | 0.9983   | 0.9983 | 0.9983 | 0.9983    |
| 0.0047        | 1.8432 | 1152 | 0.0068          | 0.9977   | 0.9977 | 0.9977 | 0.9977    |
| 0.0012        | 2.048  | 1280 | 0.0075          | 0.9975   | 0.9975 | 0.9975 | 0.9975    |
| 0.0025        | 2.2528 | 1408 | 0.0095          | 0.9971   | 0.9971 | 0.9971 | 0.9971    |
| 0.0013        | 2.4576 | 1536 | 0.0084          | 0.9976   | 0.9976 | 0.9976 | 0.9976    |
| 0.0026        | 2.6624 | 1664 | 0.0056          | 0.9985   | 0.9985 | 0.9985 | 0.9985    |
| 0.0001        | 2.8672 | 1792 | 0.0096          | 0.9976   | 0.9976 | 0.9976 | 0.9976    |
| 0.0001        | 3.072  | 1920 | 0.0049          | 0.9987   | 0.9987 | 0.9987 | 0.9987    |
| 0.0009        | 3.2768 | 2048 | 0.0085          | 0.9978   | 0.9978 | 0.9978 | 0.9978    |
| 0.0003        | 3.4816 | 2176 | 0.0078          | 0.9980   | 0.9980 | 0.9980 | 0.9980    |
| 0.0002        | 3.6864 | 2304 | 0.0057          | 0.9985   | 0.9985 | 0.9985 | 0.9985    |
| 0.0           | 3.8912 | 2432 | 0.0043          | 0.9988   | 0.9988 | 0.9988 | 0.9988    |
| 0.0           | 4.096  | 2560 | 0.0046          | 0.9987   | 0.9987 | 0.9987 | 0.9987    |
| 0.0           | 4.3008 | 2688 | 0.0045          | 0.9988   | 0.9988 | 0.9988 | 0.9988    |
| 0.0           | 4.5056 | 2816 | 0.0046          | 0.9988   | 0.9988 | 0.9988 | 0.9988    |
| 0.0           | 4.7104 | 2944 | 0.0047          | 0.9988   | 0.9988 | 0.9988 | 0.9988    |
| 0.0           | 4.9152 | 3072 | 0.0047          | 0.9988   | 0.9988 | 0.9988 | 0.9988    |
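Note that accuracy, F1, recall, and precision coincide at almost every checkpoint. That is what micro-averaging produces for single-label classification: every prediction carries exactly one label, so micro TP equals the number of correct predictions while micro FP and FN both equal the number of wrong ones, and precision, recall, and F1 all reduce to overall accuracy. A small illustration (the labels here are made up, not from this model's data):

```python
def micro_metrics(y_true, y_pred):
    """Micro-averaged metrics for single-label classification.

    With one label per prediction, micro TP = correct predictions and
    micro FP = micro FN = wrong predictions, so
    precision = recall = F1 = accuracy.
    """
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    precision = recall = accuracy  # TP/(TP+FP) == TP/(TP+FN) == accuracy
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Illustrative labels only (0 = spoof, 1 = live):
scores = micro_metrics([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1])
```

Tiny differences between the columns (e.g. the F1 of 0.9944 at step 256) would instead point to a weighted or macro average over the two classes.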
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
## License
The project is licensed under the Apache-2.0 license.