🚀 MultiSlav BiDi Models
A collection of Encoder-Decoder vanilla transformer models for bi-directional machine translation across multiple Slavic languages.
🚀 Quick Start
Prerequisites
To use a BiDi model, you must prepend a target-language token to the source text. Target-language tokens are 3-letter ISO 639-3 language codes wrapped in the format `>>xxx<<`.
Example Code
```python
from transformers import AutoTokenizer, MarianMTModel

source_lang = "pol"
target_lang = "ces"
# Repository names use the alphabetically sorted language pair.
first_lang, second_lang = sorted([source_lang, target_lang])

model_name = f"allegro/BiDi-{first_lang}-{second_lang}"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target-language token to the source sentence.
text = f">>{target_lang}<< Allegro to internetowa platforma e-commerce, na której swoje produkty sprzedają średnie i małe firmy, jak również duże marki."
batch_to_translate = [text]
translations = model.generate(**tokenizer.batch_encode_plus(batch_to_translate, return_tensors="pt"))
decoded_translation = tokenizer.batch_decode(translations, skip_special_tokens=True, clean_up_tokenization_spaces=True)[0]
print(decoded_translation)
```
Generated Czech output:

```
Allegro je online e-commerce platforma, na které své výrobky prodávají střední a malé firmy, stejně jako velké značky.
```
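Because each checkpoint is bi-directional, the reverse direction (Czech → Polish) uses the same repository; only the target-language token changes. A minimal sketch of the input construction (string handling only, no model download; the Czech sentence is illustrative):

```python
# Reverse direction (ces -> pol) for the same BiDi checkpoint.
source_lang, target_lang = "ces", "pol"

# The repository name uses the alphabetically sorted pair, so it is
# the same checkpoint that handled pol -> ces.
first_lang, second_lang = sorted([source_lang, target_lang])
model_name = f"allegro/BiDi-{first_lang}-{second_lang}"

# Only the target token differs: >>pol<< now asks for Polish output.
text = f">>{target_lang}<< Allegro je online e-commerce platforma."

print(model_name)  # allegro/BiDi-ces-pol
print(text)
```

The rest of the pipeline (tokenize, `generate`, decode) is identical to the example above.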
✨ Features
Multilingual BiDi MT Models
BiDi is a collection of Encoder-Decoder vanilla transformer models trained on a sentence-level machine translation task. Each model supports bi-directional translation.
Supported Languages
Target Language | First token |
---|---|
Czech | >>ces<< |
English | >>eng<< |
Polish | >>pol<< |
Slovak | >>slk<< |
Slovene | >>slv<< |
BiDi Models Available
We provide 10 BiDi models, covering 20 translation directions between 5 languages.
Bi-Di model | Languages supported | HF repository |
---|---|---|
BiDi-ces-eng | Czech ↔ English | allegro/BiDi-ces-eng |
BiDi-ces-pol | Czech ↔ Polish | allegro/BiDi-ces-pol |
BiDi-ces-slk | Czech ↔ Slovak | allegro/BiDi-ces-slk |
BiDi-ces-slv | Czech ↔ Slovene | allegro/BiDi-ces-slv |
BiDi-eng-pol | English ↔ Polish | allegro/BiDi-eng-pol |
BiDi-eng-slk | English ↔ Slovak | allegro/BiDi-eng-slk |
BiDi-eng-slv | English ↔ Slovene | allegro/BiDi-eng-slv |
BiDi-pol-slk | Polish ↔ Slovak | allegro/BiDi-pol-slk |
BiDi-pol-slv | Polish ↔ Slovene | allegro/BiDi-pol-slv |
BiDi-slk-slv | Slovak ↔ Slovene | allegro/BiDi-slk-slv |
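The pair-to-repository mapping in the table above can be wrapped in a small helper (hypothetical, not part of the released code) that validates the language codes and sorts the pair:

```python
# Language codes supported by the BiDi collection.
SUPPORTED = {"ces", "eng", "pol", "slk", "slv"}

def bidi_repo(source_lang: str, target_lang: str) -> str:
    """Return the Hugging Face repository name for a supported BiDi pair."""
    if source_lang not in SUPPORTED or target_lang not in SUPPORTED:
        raise ValueError(f"unsupported language pair: {source_lang}-{target_lang}")
    if source_lang == target_lang:
        raise ValueError("source and target languages must differ")
    # Repositories name the pair in alphabetical order.
    first, second = sorted([source_lang, target_lang])
    return f"allegro/BiDi-{first}-{second}"

print(bidi_repo("slv", "pol"))  # allegro/BiDi-pol-slv
```

Since each repository serves both directions, the helper works regardless of which language is the source.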
📦 Installation
The models run on the Hugging Face `transformers` library; install it together with `sentencepiece`, which the Marian tokenizer requires: `pip install transformers sentencepiece`.
📚 Documentation
Training
The SentencePiece tokenizer has a total vocab size of 32k (16k per language) and was trained on a randomly sampled part of the training corpus. Training used the MarianNMT framework with the base `transformer-big` configuration.
Training hyperparameters:
Hyperparameter | Value |
---|---|
Total Parameter Size | 209M |
Vocab Size | 32k |
Base Parameters | Marian transformer-big |
Number of Encoding Layers | 6 |
Number of Decoding Layers | 6 |
Model Dimension | 1024 |
FF Dimension | 4096 |
Heads | 16 |
Dropout | 0.1 |
Batch Size | mini batch fit to VRAM |
Training Accelerators | 4x A100 40GB |
Max Length | 100 tokens |
Optimizer | Adam |
Warmup steps | 8000 |
Context | Sentence-level MT |
Languages Supported | See Bi-Di models available |
Precision | float16 |
Validation Freq | 3000 steps |
Stop Metric | ChrF |
Stop Criterion | 20 Validation steps |
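The hyperparameters above roughly correspond to a MarianNMT configuration along these lines. This is a sketch under assumptions, not the authors' actual config file; the option names are standard Marian flags, but the exact setup may have differed:

```yaml
# Sketch of a Marian transformer-big config matching the table above.
type: transformer
enc-depth: 6
dec-depth: 6
dim-emb: 1024
transformer-dim-ffn: 4096
transformer-heads: 16
transformer-dropout: 0.1
optimizer: adam
lr-warmup: 8000
fp16: true
max-length: 100
valid-freq: 3000
valid-metrics: [chrf]
early-stopping: 20
mini-batch-fit: true       # fit mini-batch size to available VRAM
devices: [0, 1, 2, 3]      # 4x A100 40GB in the original setup
```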
Training Corpora
Our main research question was: "How does adding additional, related languages impact translation quality?" We explored it within the Slavic language family. The BiDi models are our baseline before expanding the data regime with higher-level multilinguality.
Datasets were downloaded via the MT-Data library. The total number of examples after filtering and deduplication varies by language pair.
Language pair | Number of training examples |
---|---|
Czech ↔ Polish | 63M |
Czech ↔ Slovak | 30M |
Czech ↔ Slovene | 25M |
Polish ↔ Slovak | 26M |
Polish ↔ Slovene | 23M |
Slovak ↔ Slovene | 18M |
Czech ↔ English | 151M |
English ↔ Polish | 150M |
English ↔ Slovak | 52M |
English ↔ Slovene | 40M |
The datasets used (availability varies by direction): paracrawl, opensubtitles, multiparacrawl, dgt, elrc, xlent, wikititles, wmt, wikimatrix, dcep, ELRC, tildemodel, europarl, eesc, eubookshop, emea, jrc_acquis, ema, qed, elitr_eca, EU-dcep, rapid, ecb, kde4, news_commentary, kde, bible_uedin, europat, elra, wikipedia, wikimedia, tatoeba, globalvoices, euconst, ubuntu, php, ecdc, eac, eac_reference, gnome, EU-eac, books, EU-ecdc, newsdev, khresmoi_summary, czechtourism, khresmoi_summary_dev, worldbank.
Evaluation
Evaluation of the models was performed on Flores200 dataset. The table below compares performance of the open-source models and all applicable models from our collection. Metric used: Unbabel/wmt22-comet-da.
Direction | CES → ENG | CES → POL | CES → SLK | CES → SLV | ENG → CES | ENG → POL | ENG → SLK | ENG → SLV | POL → CES | POL → ENG | POL → SLK | POL → SLV | SLK → CES | SLK → ENG | SLK → POL | SLK → SLV | SLV → CES | SLV → ENG | SLV → POL | SLV → SLK |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
M2M-100 | 87.0 | 89.0 | 92.1 | 89.7 | 88.6 | 86.4 | 88.4 | 87.3 | 89.6 | 84.6 | 89.4 | 88.4 | 92.7 | 86.8 | 89.1 | 89.6 | 90.3 | 86.4 | 88.7 | 90.1 |
NLLB-200 | 88.1 | 88.9 | 91.2 | 88.6 | 90.4 | 88.5 | 90.1 | 88.8 | 89.4 | 85.8 | 88.9 | 87.7 | 91.8 | 88.2 | 88.9 | 88.8 | 90.0 | 87.5 | 88.6 | 89.4 |
Seamless-M4T | 87.5 | 80.9 | 90.8 | 82.0 | 90.7 | 88.5 | 90.6 | 89.6 | 79.6 | 85.4 | 80.0 | 76.4 | 91.5 | 87.2 | 81.2 | 82.9 | 80.9 | 87.3 | 76.7 | 81.0 |
OPUS-MT Sla-Sla | 88.2 | 82.8 | - | 83.4 | 89.1 | 85.6 | - | 84.5 | 82.9 | 82.2 | - | 81.2 | - | - | - | - | 83.5 | 84.1 | 80.8 | - |
OPUS-MT SK-EN | - | - | - | - | - | - | 89.5 | - | - | - | - | - | - | 88.4 | - | - | - | - | - | - |
Our contributions: | ||||||||||||||||||||
BiDi Models* | 87.5 | 89.4 | 92.4 | 89.8 | 87.8 | 86.2 | 87.2 | 86.6 | 90.0 | 85.0 | 89.1 | 88.4 | 92.9 | 87.3 | 88.8 | 89.4 | 90.0 | 86.9 | 88.1 | 89.1 |
P4-pol◊ | - | 89.6 | 90.8 | 88.7 | - | - | - | - | 90.2 | - | 89.8 | 88.7 | 91.0 | - | 89.3 | 88.4 | 89.3 | - | 88.7 | 88.5 |
P5-eng◊ | 88.0 | 89.0 | 90.7 | 89.0 | 88.8 | 87.3 | 88.4 | 87.5 | 89.0 | 85.7 | 88.5 | 87.8 | 91.0 | 88.2 | 88.6 | 88.5 | 89.6 | 87.2 | 88.4 | 88.9 |
P5-ces◊ | 87.9 | 89.6 | 92.5 | 89.9 | 88.4 | 85.0 | 87.9 | 85.9 | 90.3 | 84.5 | 89.5 | 88.0 | 93.0 | 87.8 | 89.4 | 89.8 | 90.3 | 85.7 | 87.9 | 89.8 |
MultiSlav-4slav | - | 89.7 | 92.5 | 90.0 | - | - | - | - | 90.2 | - | 89.6 | 88.7 | 92.9 | - | 89.4 | 90.1 | 90.6 | - | 88.9 | 90.2 |
MultiSlav-5lang | 87.8 | 89.8 | 92.5 | 90.1 | 88.9 | 86.9 | 88.0 | 87.3 | 90.4 | 85.4 | 89.8 | 88.9 | 92.9 | 87.8 | 89.6 | 90.2 | 90.6 | 87.0 | 89.2 | 90.2 |
◊ system of two models, Many2XXX and XXX2Many; see P5-ces2many
* results combined across all bi-directional models; each value comes from the applicable pair model
Limitations and Biases
We did not evaluate the inherent bias contained in the training datasets. We advise validating the models' bias in your target domain. This may be especially problematic when translating from English into Slavic languages, which require explicitly marked gender; the models may hallucinate gender based on biases present in the training data.
🔧 Technical Details
- SentencePiece tokenizer with a vocab size of 32k (16k per language) was used.
- The MarianNMT framework was used for training with the base configuration `transformer-big`.
📄 License
The model is licensed under CC BY 4.0, which allows for commercial use.
Citation
TO BE UPDATED SOON 🤗
Contact Options
Authors:
- MLR @ Allegro: Artur Kot, Mikołaj Koszowski, Wojciech Chojnowski, Mieszko Rutkowski
- Laniqo.com: Artur Nowakowski, Kamil Guttmann, Mikołaj Pokrywka
Please don't hesitate to contact the authors if you have any questions or suggestions:
- e-mail: artur.kot@allegro.com or mikolaj.koszowski@allegro.com
- LinkedIn: Artur Kot or Mikołaj Koszowski

