GEITje-7B-chat-v2
GEITje-7B-chat-v2 is a large open Dutch language model based on Mistral 7B, further trained on Dutch texts to enhance its Dutch language skills and its knowledge of Dutch topics. However, at the request of Stichting BREIN, it is no longer available.
Important Note
At the urgent request of Stichting BREIN, GEITje is no longer available, effective immediately. All model files (the weights) and checkpoints have been deleted from this repo. See my blog post (Dutch, English) for further clarification.
Usage Tip
Try the chat model in Hugging Face Spaces!
Features
Base Model: Mistral 7B
GEITje is based on Mistral 7B, a large open language model with 7 billion parameters trained by Mistral AI. According to Mistral AI, the 7B model outperforms Llama 2 13B on all (English-language) benchmarks they tested it on. Mistral 7B is released under the Apache 2.0 open source license.
Further Training on Dutch Texts
GEITje was created by further training Mistral 7B on at least 10 billion tokens of Dutch text from the Dutch Gigacorpus and the MADLAD-400 web crawling corpus. It's a full-parameter finetune, not a PEFT or LoRA finetune. Like Mistral, GEITje has a context length of 8,192 tokens.
Finetuned for Dialogues
Two initial chat variants, GEITje-chat and GEITje-chat-v2, have been finetuned to demonstrate GEITje's capabilities for chat applications. They can follow instructions, answer questions, and hold dialogues on various topics.
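For illustration, chat usage with such a model would typically follow the standard transformers chat-template flow sketched below. The repository id is assumed from the model card name, and since the weights have been removed it will no longer resolve; the snippet only shows the general pattern.

```python
# Illustrative sketch only: the GEITje weights have been removed from this repo,
# so the assumed model id below will no longer resolve.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rijgersberg/GEITje-7B-chat-v2"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a prompt with the tokenizer's chat template and generate a reply.
messages = [
    {"role": "user", "content": "Wat zijn de drie grootste steden van Nederland?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```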
Documentation
Model description
GEITje-7B is a large open Dutch language model with 7 billion parameters, based on Mistral 7B. It has been further trained on 10 billion tokens of Dutch text, improving its Dutch language skills and increasing its knowledge of Dutch topics.
More info
Read more about GEITje-chat in the README on GitHub.
Checkpoints
An intermediate checkpoint is available in the checkpoints branch.
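In transformers, a non-default branch is selected with the `revision` argument; a minimal sketch, assuming the repository id below (the files have since been deleted, so this is illustrative only):

```python
# Sketch: load the intermediate checkpoint from the "checkpoints" branch.
# The files have been deleted, so this will no longer run as-is.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rijgersberg/GEITje-7B-chat-v2"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id, revision="checkpoints")
model = AutoModelForCausalLM.from_pretrained(model_id, revision="checkpoints")
```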
Technical Details
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
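As a rough reconstruction, these values map onto a Hugging Face TrainingArguments configuration along the following lines; the output directory is a placeholder and the original training script may have set additional options not listed above.

```python
# Approximate mapping of the listed hyperparameters onto TrainingArguments.
# This is a reconstruction, not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="geitje-7b-chat-v2",   # placeholder output directory
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,    # 2 x 8 = 16 total train batch size
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    adam_beta1=0.9,                   # Adam settings as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```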
Training results
| Training Loss | Epoch | Step  | Validation Loss |
|--------------:|------:|------:|----------------:|
| 0.7832        | 0.05  | 609   | 0.8844          |
| 0.6904        | 0.1   | 1218  | 0.8698          |
| 0.8195        | 0.15  | 1827  | 0.8583          |
| 0.7463        | 0.2   | 2436  | 0.8475          |
| 0.6739        | 0.25  | 3045  | 0.8395          |
| 0.7604        | 0.3   | 3654  | 0.8332          |
| 0.8024        | 0.35  | 4263  | 0.8261          |
| 0.6881        | 0.4   | 4872  | 0.8203          |
| 0.6466        | 0.45  | 5481  | 0.8167          |
| 0.7042        | 0.5   | 6090  | 0.8121          |
| 0.702         | 0.55  | 6699  | 0.8081          |
| 0.7255        | 0.6   | 7308  | 0.8054          |
| 0.7558        | 0.65  | 7917  | 0.8036          |
| 0.7587        | 0.7   | 8526  | 0.8022          |
| 0.9217        | 0.75  | 9135  | 0.8016          |
| 0.6938        | 0.8   | 9744  | 0.8011          |
| 0.6962        | 0.85  | 10353 | 0.8011          |
| 0.664         | 0.9   | 10962 | 0.8011          |
| 0.6544        | 0.95  | 11571 | 0.8011          |
| 0.6782        | 1.0   | 12180 | 0.8011          |
Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
License
This project is released under the Apache 2.0 license.