Viking 7B
Viking 7B is a 7B-parameter decoder-only transformer pretrained on 2 trillion tokens of Finnish, English, Swedish, Danish, Norwegian, and Icelandic text, plus code. It is fully open source, available under the Apache 2.0 License.
This project is part of an ongoing initiative to develop open-source large language models for non-English languages, especially low-resource ones such as Finnish. The model is proficient in Finnish, English, and the Scandinavian languages, can perform basic translation between them, and can understand and generate code.
Quick Start
Viking 7B is distributed through the Hugging Face Hub. To get started, load a checkpoint with the transformers library as shown in the Usage Examples section below; available checkpoint branches are described under Training Checkpoints.
Features
- Multilingual Proficiency: Fluent in Finnish, English, Swedish, Danish, Norwegian, and Icelandic, and capable of basic translation between them.
- Code Understanding and Generation: Can understand and generate code.
- Open Source: Released under the Apache 2.0 License.
Installation
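Viking 7B is loaded through the Hugging Face transformers library with PyTorch. A minimal environment (an assumed setup, not an official requirements list) is:

pip install transformers torch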
Usage Examples
Basic Usage
import torch
import transformers

# Each checkpoint branch is named for the number of training tokens seen, e.g. "2000B".
branch = "2000B"
model = transformers.AutoModelForCausalLM.from_pretrained(
    "LumiOpen/Viking-7B",
    # Prefer bfloat16 where the hardware supports it, otherwise fall back to float16.
    torch_dtype=torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16,
    revision=branch,
)
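Since Viking is a base model, it continues text rather than following instructions. A minimal generation sketch to go with the loading code above (the Finnish prompt and sampling settings are illustrative, not from the model card):

tokenizer = transformers.AutoTokenizer.from_pretrained("LumiOpen/Viking-7B")
# Move the model and inputs to a GPU for practical generation speed.
inputs = tokenizer("Suomen pääkaupunki on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))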
Documentation
Model Family
Viking is the second set of models released by LumiOpen and is available in three parameter counts: 7B, 13B, and 33B.
Model Overview
NOTE: Viking is a base model which needs further fine-tuning for most use cases.
Viking is a generative pretrained transformer using a LLaMA-like architecture with rotary positional embeddings and flash attention.
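For readers unfamiliar with rotary embeddings, here is a minimal sketch of the interleaved-pair RoPE formulation; Viking's actual implementation lives in its training framework and may differ in detail:

import torch

def apply_rope(x, base=10000.0):
    # x: (..., seq_len, head_dim). Rotate each even/odd channel pair by a
    # position-dependent angle; frequency falls off geometrically per pair.
    seq_len, dim = x.shape[-2], x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    angles = torch.outer(torch.arange(seq_len, dtype=torch.float32), inv_freq)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out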
| Property | Details |
|---|---|
| n_parameters | 7.55B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab_size | 131072 |
| sequence_length | 4096 |
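These values can be cross-checked against the published configuration; the attribute names below assume the LLaMA-style config the card describes:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("LumiOpen/Viking-7B")
# Expect 32 layers, 32 heads, hidden size 4096, vocabulary 131072.
print(config.num_hidden_layers, config.num_attention_heads,
      config.hidden_size, config.vocab_size)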
Training
Viking 7B was trained on the LUMI supercomputer using 256 AMD MI250X GPUs. Each MI250X has two Graphics Complex Dies (GCDs), giving a world size of 512 during training. Training used activation checkpointing, a micro-batch size of 1, gradient accumulation of 16, and a 3D parallelism strategy of TP = 1, PP = 4, DP = 128.
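The layout arithmetic works out as follows (a quick sanity check using the figures above):

gpus, gcds_per_gpu = 256, 2
world_size = gpus * gcds_per_gpu   # 512 GCDs, each treated as a separate device
tp, pp = 1, 4                      # tensor- and pipeline-parallel degrees
dp = world_size // (tp * pp)       # data-parallel degree
assert world_size == tp * pp * dp  # 512 = 1 * 4 * 128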
Training began in September 2023 using a custom fork of the Megatron-DeepSpeed framework.
Training Hyperparameters
| Property | Details | Comment |
|---|---|---|
| Precision | bfloat16 | |
| Optimizer | AdamW | |
| Learning rate | 3e-4 | 10B tokens warm-up, cosine decay to 3e-5 |
| Weight decay | 1e-1 | |
| Batch size | 1024 | 1024 samples x 4096 tokens = 4,194,304 tokens |
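For illustration, the schedule implied by the table can be sketched as below; the decay horizon is assumed here to be the full 2T-token run, which the card does not state explicitly:

import math

TOKENS_PER_STEP = 1024 * 4096  # global batch in tokens, from the table
WARMUP_TOKENS = 10e9           # 10B-token warm-up
TOTAL_TOKENS = 2e12            # assumed decay horizon (full 2T-token run)
PEAK_LR, MIN_LR = 3e-4, 3e-5

def lr_at(tokens_seen):
    # Linear warm-up to the peak rate, then cosine decay to the minimum rate.
    if tokens_seen < WARMUP_TOKENS:
        return PEAK_LR * tokens_seen / WARMUP_TOKENS
    progress = min((tokens_seen - WARMUP_TOKENS) / (TOTAL_TOKENS - WARMUP_TOKENS), 1.0)
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1 + math.cos(math.pi * progress))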
Tokenizer
Viking uses a custom 128K Bloom tokenizer trained on the same English, Finnish, Swedish, Danish, Norwegian, Icelandic, and code data used to train the model.
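The tokenizer loads like any other Hugging Face tokenizer; a quick way to confirm the 128K vocabulary (the Finnish sample sentence is illustrative):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LumiOpen/Viking-7B")
print(tokenizer.vocab_size)                   # expect 131072
print(tokenizer.tokenize("Hyvää huomenta!"))  # sample Finnish input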
Dataset
Viking is being trained on a 2 trillion token mixed dataset of English, Finnish, Swedish, Danish, Norwegian, Icelandic and code. More details on the exact dataset will be published soon.
Evaluation Results
Full evaluation results will be published with the final model.
Training Checkpoints
Training checkpoints are available as branches in the repository and are released roughly every 100B tokens; branch names give the number of training tokens seen (for example, 2000B). The main branch will always point to the latest checkpoint.
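One way to enumerate the available checkpoint branches programmatically (a sketch using the huggingface_hub client):

from huggingface_hub import list_repo_refs

refs = list_repo_refs("LumiOpen/Viking-7B")
print(sorted(ref.name for ref in refs.branches))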
Ethical Considerations and Limitations
Viking 7B is a release of a partially trained model, and special care should be taken when using any output.
Viking is an advanced language model, primarily optimized for English, Finnish, Swedish, Norwegian, Danish, Icelandic and code, with no meaningful proficiency in any other languages. As with most AI-driven systems, Viking is a product of the vast data it has been trained on, which may reflect the imperfections, biases, and idiosyncrasies of the wider web. Viking may, at times, produce outputs that can be considered inaccurate, prejudiced, or controversial. Users and developers engaging with Viking should exercise discretion and consider additional evaluation and customization to ensure the model's responses align with their specific needs and ethical standards.
License
Viking is released under the Apache 2.0 license.