GEITje-7B-chat
GEITje-7B-chat is a large open Dutch language model. It's based on Mistral 7B and further trained on Dutch text, enhancing its Dutch language skills and knowledge of Dutch topics. However, at the request of Stichting BREIN, the model is no longer available (see the note below).
Important Note
At the pressing request of Stichting BREIN, GEITje is no longer available, effective immediately. All model files (the weights) and checkpoints have been deleted from this repo. See my blog post (Dutch, English) for further clarification.
Check out GEITje-7B-chat-v2
⨠Features
Model Structure
- Base Model - Mistral 7B: GEITje is built upon Mistral 7B, a large open language model with 7 billion parameters trained by Mistral AI. It outperforms Llama 2 13B on English-language benchmarks and is released under the Apache 2.0 open source license.
- Dutch Text Training - GEITje: GEITje is created by further training Mistral 7B on 10 billion tokens of Dutch text from the Dutch Gigacorpus and the MADLAD-400 web crawling corpus. It's a full-parameter finetune with a context length of 8,192 tokens.
- Dialogue Finetuning - GEITje-chat: Two initial chat variants, GEITje-chat and GEITje-chat-v2, are finetuned for chat applications, capable of following instructions, answering questions, and holding dialogues on various topics.
Performance Improvement
The additional training on Dutch text has improved GEITje's Dutch language skills and increased its knowledge of Dutch topics.
Documentation
Model description
Mistral – Base Model
GEITje is based on Mistral 7B. It's a large open language model with 7 billion parameters, trained by Mistral AI. According to Mistral AI, the 7B model performs better than Llama 2 13B on all (English-language) benchmarks they tested it on. Mistral 7B has been released under the Apache 2.0 open source license.
GEITje – Trained Further on Dutch Texts
GEITje was created by further training Mistral 7B on no less than 10 billion tokens of Dutch text from the Dutch Gigacorpus and the MADLAD-400 web crawling corpus. It is a full-parameter finetune, meaning all parameters were updated during training; it is not a PEFT or LoRA finetune. Like Mistral, GEITje has a context length of 8,192 tokens.
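To make that distinction concrete, here is a minimal sketch (in Python, using the Hugging Face transformers library) of what a full-parameter finetune implies in practice; the base-model repo id and the details are illustrative, not the actual GEITje training script.

```python
# Illustrative sketch only: a full-parameter finetune updates every weight of the
# base model, unlike PEFT/LoRA, which freezes them and trains small adapters.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")  # base model

# Full-parameter finetune: all weights are trainable.
for param in model.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")  # roughly 7 billion

# A LoRA finetune would instead freeze these weights and train only small
# low-rank adapter matrices, typically well under 1% of the parameter count.
```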
GEITje-chat – Finetuned for Dialogues
As a demonstration of GEITje's capabilities for chat applications, two initial chat variants of GEITje have also been finetuned: GEITje-chat and GEITje-chat-v2. They can follow instructions, answer questions, and hold dialogues on a variety of topics.
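As a rough illustration of how such a chat variant could be used with the transformers library: since the weights in this repo have been removed, the sketch below points at the successor model instead. The repo id "Rijgersberg/GEITje-7B-chat-v2" and the presence of a chat template in the tokenizer are assumptions.

```python
# Hedged usage sketch; the repo id below is assumed, and the weights in this
# repo itself are no longer available.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="Rijgersberg/GEITje-7B-chat-v2",  # assumed repo id of the v2 chat model
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Dutch prompt: "Explain in two sentences what GEITje is."
messages = [{"role": "user", "content": "Leg in twee zinnen uit wat GEITje is."}]

# Format the conversation with the tokenizer's chat template (assuming one is set).
prompt = chat.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
result = chat(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```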
More info
Read more about GEITje-chat in the README on GitHub.
Checkpoints
Intermediate checkpoints are available in the checkpoints branch.
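For reference, a checkpoint stored on a branch can normally be loaded by passing the branch name as the `revision` argument. Note that, per the notice above, the checkpoint files in this repo have since been removed, so the snippet below is only a sketch, and the repo id is an assumption.

```python
# Sketch only: loading from a branch via the `revision` argument. The repo id is
# an assumption, and the checkpoint files have been removed from this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Rijgersberg/GEITje-7B-chat"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id, revision="checkpoints")
model = AutoModelForCausalLM.from_pretrained(repo_id, revision="checkpoints")
```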
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
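For readers who want to set up a comparable run, the hyperparameters above map naturally onto Hugging Face TrainingArguments. The sketch below is an assumption about the setup (the output directory and precision setting are illustrative), not the authors' actual training script.

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="geitje-7b-chat",       # illustrative output path
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,     # effective train batch size: 2 * 8 = 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
    bf16=True,                         # assumption: mixed-precision training
)
# The listed Adam betas (0.9, 0.999) and epsilon (1e-08) are the defaults.
```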
Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0263        | 0.2   | 236  | 0.9482          |
| 1.0368        | 0.4   | 472  | 0.9574          |
| 0.9503        | 0.6   | 708  | 0.9492          |
| 1.1419        | 0.8   | 944  | 0.9406          |
| 1.2161        | 1.0   | 1180 | 0.9317          |
| 0.6695        | 1.2   | 1416 | 0.9407          |
| 0.7379        | 1.4   | 1652 | 0.9350          |
| 0.7695        | 1.6   | 1888 | 0.9282          |
| 0.6795        | 1.8   | 2124 | 0.9218          |
| 0.6217        | 2.0   | 2360 | 0.9174          |
| 0.438         | 2.2   | 2596 | 0.9546          |
| 0.3719        | 2.39  | 2832 | 0.9546          |
| 0.4853        | 2.59  | 3068 | 0.9548          |
| 0.3852        | 2.79  | 3304 | 0.9548          |
| 0.48          | 2.99  | 3540 | 0.9548          |
Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
License
This project is licensed under the Apache 2.0 license.