# bert-base-uncased-qqp
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE QQP dataset. It can be used for tasks such as text classification and natural language inference, and it achieves high accuracy on the evaluation set.
## Quick Start
This section provides a brief introduction to the model. The `bert-base-uncased-qqp` model is fine-tuned from the `bert-base-uncased` model on the GLUE QQP dataset. It achieves the following results on the evaluation set:
- Loss: 0.2829
- Accuracy: 0.9100
- F1: 0.8788
- Combined Score: 0.8944
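The combined score is consistent with the arithmetic mean of accuracy and F1: (0.9100 + 0.8788) / 2 = 0.8944.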
## Features
- Fine-tuned from BERT: based on the powerful `bert-base-uncased` model, further optimized on the GLUE QQP dataset.
- High performance: achieves strong accuracy and F1 scores on the evaluation set.
- Multiple tasks: suitable for tasks such as text classification and natural language inference.
## Installation
The original document does not provide specific installation steps.
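In practice, a checkpoint like this is typically loaded with the standard Hugging Face stack (for example, `pip install transformers torch`); exact version pins are not given in the card, though the framework versions used during training are listed below.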
## Usage Examples
The original document does not include code examples; the sketch below is illustrative only.
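The following is a minimal inference sketch, not taken from the original card. The Hub repository id `bert-base-uncased-qqp` is an assumption; substitute the id under which the checkpoint is actually published. Since QQP is a question-pair task, the two questions are encoded together as a single pair.

```python
# Minimal inference sketch, assuming the checkpoint is published on the
# Hugging Face Hub as "bert-base-uncased-qqp" (the repo id is an assumption).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "bert-base-uncased-qqp"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# QQP asks whether two questions are duplicates; encode them as a pair.
q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)

# In GLUE QQP, label 1 = duplicate, label 0 = not duplicate.
print(f"P(duplicate) = {probs[0, 1].item():.3f}")
```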
## Documentation
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
#### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
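As a hedged illustration (not the authors' actual training script), these hyperparameters map naturally onto the Transformers `Trainer` API. The `max_length=128` below is an assumption, since the card does not state a sequence length:

```python
# Hedged training sketch mapping the reported hyperparameters onto the
# Transformers Trainer API. max_length=128 is an assumption (not in the card).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

raw = load_dataset("glue", "qqp")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def encode(batch):
    # QQP examples are question pairs; encode both questions together.
    return tokenizer(batch["question1"], batch["question2"],
                     truncation=True, max_length=128)

encoded = raw.map(encode, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="bert-base-uncased-qqp",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    num_train_epochs=3.0,
    seed=42,
    lr_scheduler_type="linear",
    # The default optimizer already uses betas=(0.9, 0.999) and eps=1e-8,
    # matching the values reported above.
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```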
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.2511 | 1.0 | 11371 | 0.2469 | 0.8969 | 0.8641 | 0.8805 |
| 0.1763 | 2.0 | 22742 | 0.2379 | 0.9071 | 0.8769 | 0.8920 |
| 0.1221 | 3.0 | 34113 | 0.2829 | 0.9100 | 0.8788 | 0.8944 |
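For reference, the accuracy/F1/combined-score triple above is the standard GLUE QQP metric set. A hedged sketch of computing it with `datasets.load_metric` (available in the Datasets 2.1.0 release listed below):

```python
# Hedged sketch: computing the GLUE QQP metrics reported above.
# load_metric is available in Datasets 2.1.0 (see framework versions).
from datasets import load_metric

metric = load_metric("glue", "qqp")

# Toy predictions and labels; a real evaluation uses the QQP validation split.
result = metric.compute(predictions=[1, 0, 1, 1], references=[1, 0, 0, 1])
result["combined_score"] = (result["accuracy"] + result["f1"]) / 2
print(result)  # {'accuracy': ..., 'f1': ..., 'combined_score': ...}
```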
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
## Technical Details
The model is fine-tuned from `bert-base-uncased` on the GLUE QQP dataset using the hyperparameters listed above. Accuracy and F1 improve with each epoch, although validation loss rises between epochs 2 and 3 (0.2379 → 0.2829), so the final checkpoint trades a slightly higher loss for the best accuracy and F1.
## License
This model is released under the Apache 2.0 license.
## Model Index
| Property | Details |
|:---|:---|
| Model Name | bert-base-uncased-qqp |
| Base Model | bert-base-uncased |
| Task Type | Text Classification, Natural Language Inference |
| Dataset | GLUE QQP |
| Metrics | Accuracy, F1, Precision, Recall, AUC, Loss |
| Accuracy | 0.9100 (evaluation set; full precision 0.9099925797674994) |
| F1 | 0.8788 (evaluation set; full precision 0.8788252139455897) |
| Loss | 0.2829 (evaluation set; full precision 0.28284332156181335) |