Fietje 2
Fietje 2 is an open and efficient LLM for Dutch. It's an adapted version of microsoft/phi-2, optimized for Dutch text generation through training on 28B tokens. Despite its relatively small size of 2.7 billion parameters, it performs almost as well as more powerful Dutch LLMs twice its size.
- Base version (this one)
- Instruct version
- Chat version
- GGUF of base
⨠Features
- Adapted from microsoft/phi-2 for Dutch text generation.
- Small and efficient with 2.7 billion parameters, performing almost on par with more powerful Dutch LLMs of twice its size.
Documentation
A thorough description of the creation and evaluation of Fietje, as well as usage examples, is available in the GitHub repository.
License
This project is licensed under the MIT license.
Installation
The original model card lists no dedicated installation steps. Fietje 2 runs on the standard Hugging Face stack, so the packages listed under Framework versions below should be all that is needed.
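As a minimal sketch rather than an official requirement list, the check below assumes only the standard Hugging Face stack is needed, with version pins taken from the Framework versions section:

```python
# Hypothetical environment check: confirms the packages from the
# "Framework versions" section are importable. Install them first with, e.g.:
#   pip install transformers==4.39.1 datasets==2.18.0 tokenizers==0.15.2 torch
import datasets
import tokenizers
import torch
import transformers

# These are the versions the model card reports; newer releases will
# probably work too, but that is an assumption.
print("transformers:", transformers.__version__)  # card reports 4.39.1
print("torch:", torch.__version__)                # card reports 2.1.2+cu121
print("datasets:", datasets.__version__)          # card reports 2.18.0
print("tokenizers:", tokenizers.__version__)      # card reports 0.15.2
```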
Usage Examples
The original model card contains no code examples; official usage examples are available in the GitHub repository mentioned above.
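As a hedged sketch only: the snippet below assumes the base model is published on the Hugging Face Hub as BramVanroy/fietje-2 and uses the standard transformers text-generation pipeline. Adjust the model id, dtype, and device settings for your setup.

```python
# Sketch: plain text generation with the base model via the transformers
# pipeline. "BramVanroy/fietje-2" is an assumed Hub id for this base model.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="BramVanroy/fietje-2",
    torch_dtype=torch.bfloat16,  # assumption: bf16 on GPU; use float32 on CPU
    device_map="auto",           # requires the accelerate package
)

# As a base (non-instruct) model, Fietje 2 continues a prompt rather than
# following instructions; use the instruct or chat version for chat-style use.
prompt = "Het mooiste aan de Nederlandse taal is"
output = generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.95)
print(output[0]["generated_text"])
```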
Technical Details
Citation
If you use Fietje or the CulturaX + Wikipedia filtered subset in your work, please cite the following paper:
@misc{vanroy2024fietjeopenefficientllm,
title={Fietje: An open, efficient LLM for Dutch},
author={Bram Vanroy},
year={2024},
eprint={2412.15450},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.15450},
}
Intended uses & limitations
The same limitations as phi-2, and LLMs in general, apply here. LLMs hallucinate, make mistakes, and should not be trusted. Use at your own risk!
Training data
Fietje was continually pretrained on 28B Dutch tokens, consisting of the full Dutch portion of Wikipedia (around 15% of the data) supplemented with Dutch tokens from CulturaX. A newer version of this dataset is available as BramVanroy/wikipedia_culturax_dutch, which also documents the filtering that was applied to ensure high data quality.
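Purely as an illustration, the filtered dataset can presumably be streamed with the datasets library. The dataset id comes from this card; the configuration name and the text column are assumptions that should be checked against the dataset card.

```python
# Sketch: stream a few documents from the filtered Dutch pretraining data.
# The "10B" config name and the "text" column are assumptions; check the
# dataset card for the actual subsets and schema.
from datasets import load_dataset

ds = load_dataset(
    "BramVanroy/wikipedia_culturax_dutch",
    "10B",           # assumed config; subsets appear to be named by token count
    split="train",
    streaming=True,  # avoids downloading the full corpus
)

for i, example in enumerate(ds):
    print(example["text"][:200])
    if i == 2:
        break
```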
Training procedure
I am thankful to the Flemish Supercomputer Center (VSC) for providing the computational power for this project. Including time spent waiting for jobs in the queue, training took around two weeks on four nodes with 4x A100 80GB each (16 GPUs in total).
Training was done with the wonderful alignment-handbook, using DeepSpeed as the backend. The exact training recipes and SLURM script are available in the GitHub repository.
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 3
- total_train_batch_size: 1920
- total_eval_batch_size: 640
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07
- lr_scheduler_type: linear
- num_epochs: 1.0
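To make these numbers concrete, the sketch below maps them onto Hugging Face TrainingArguments. This is only an illustration; the actual recipe uses the alignment-handbook configs and DeepSpeed, as described above.

```python
# Illustrative mapping of the reported hyperparameters onto TrainingArguments.
# Not the actual recipe -- that lives in the alignment-handbook configs.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="fietje-2-continued-pretraining",  # hypothetical output path
    learning_rate=9e-05,
    per_device_train_batch_size=40,
    per_device_eval_batch_size=40,
    gradient_accumulation_steps=3,
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-07,
    seed=42,
)

# Effective batch sizes across 16 GPUs, matching the card:
#   train: 40 * 3 * 16 = 1920    eval: 40 * 16 = 640
```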
Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6334 | 0.13 | 900 | 1.5937 |
| 1.5469 | 0.26 | 1800 | 1.5051 |
| 1.4937 | 0.4 | 2700 | 1.4628 |
| 1.4633 | 0.53 | 3600 | 1.4375 |
| 1.4485 | 0.66 | 4500 | 1.4203 |
| 1.4374 | 0.79 | 5400 | 1.4085 |
| 1.4278 | 0.92 | 6300 | 1.4013 |
Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
Model Information
| Property | Details |
|:---------|:--------|
| Model Type | Fietje 2 |
| Base Model | microsoft/phi-2 |
| Training Data | uonlp/CulturaX, wikimedia/wikipedia, BramVanroy/wikipedia_culturax_dutch |
| Pipeline Tag | text-generation |
| Inference | false |
| Model Index Name | fietje-2 |