mGPT 13B
A multilingual language model trained on 61 languages from 25 language families.
Quick Start
This is a multilingual language model trained on 61 languages from 25 language families (see the list below).
Features
- Multilingual Support: Covers 61 languages from 25 language families.
- Large-scale Training: Pretrained on 600 GB of texts.
Installation
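The model builds on the GPT-2 code base in the HuggingFace library (see Technical Details), so a typical setup only requires the `transformers` package and PyTorch, e.g. `pip install transformers torch`. Exact dependency versions are not specified in this card.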
Usage Examples
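A minimal text-generation sketch with the Hugging Face `transformers` API is shown below. The repository id `ai-forever/mGPT-13B` and the sampling settings are assumptions, not values taken from this card:

```python
# Minimal generation sketch (assumed checkpoint id: "ai-forever/mGPT-13B").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ai-forever/mGPT-13B"  # assumption: adjust to the released checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The history of the Basque language"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample a short continuation; the sampling settings are illustrative.
output = model.generate(
    input_ids,
    max_new_tokens=50,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```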
Documentation
Dataset
The model was pretrained on 600 GB of texts, mostly from mC4 and Wikipedia. The training data was deduplicated: each text in the corpus is hashed with a 64-bit hash, and only texts with a unique hash are kept. Documents are also filtered by their zlib compression rate, and the most strongly and weakly compressing deduplicated texts are discarded.
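To make the deduplication and filtering steps concrete, here is a minimal sketch; the specific 64-bit hash (truncated SHA-256) and the compression-ratio cut-offs are illustrative assumptions, not the values used for mGPT pretraining:

```python
import hashlib
import zlib

def dedup_and_filter(texts, low=0.2, high=0.9):
    """Keep each unique text (by 64-bit hash) whose zlib compression ratio
    falls between the two cut-offs. Cut-off values are illustrative
    assumptions, not the ones used for mGPT pretraining."""
    seen = set()
    kept = []
    for text in texts:
        data = text.encode("utf-8")
        # 64-bit hash: first 8 bytes of a SHA-256 digest.
        h = int.from_bytes(hashlib.sha256(data).digest()[:8], "big")
        if h in seen:
            continue  # exact duplicate, drop it
        seen.add(h)
        # Compression ratio: compressed size / original size.
        ratio = len(zlib.compress(data)) / max(len(data), 1)
        # Discard the most strongly (low ratio) and most weakly
        # (high ratio) compressing documents.
        if low <= ratio <= high:
            kept.append(text)
    return kept
```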
Figure: number of tokens per language in the pretraining corpus (log scale).
Languages
Afrikaans (af), Arabic (ar), Armenian (hy), Azerbaijani (az), Basque (eu), Bashkir (ba), Belarusian (be), Bengali (bn), Bulgarian (bg), Burmese (my), Buryat (bxr), Chuvash (cv), Danish (da), English (en), Estonian (et), Finnish (fi), French (fr), Georgian (ka), German (de), Greek (el), Hebrew (he), Hindi (hi), Hungarian (hu), Indonesian (id), Italian (it), Japanese (ja), Javanese (jv), Kalmyk (xal), Kazakh (kk), Korean (ko), Kyrgyz (ky), Latvian (lv), Lithuanian (lt), Malay (ms), Malayalam (ml), Marathi (mr), Mongolian (mn), Ossetian (os), Persian (fa), Polish (pl), Portuguese (pt), Romanian (ro), Russian (ru), Spanish (es), Swedish (sv), Swahili (sw), Tatar (tt), Telugu (te), Thai (th), Turkish (tr), Turkmen (tk), Tuvan (tyv), Ukrainian (uk), Uzbek (uz), Vietnamese (vi), Yakut (sah), Yoruba (yo)
By language family
| Language Family | Languages |
|---|---|
| Afro-Asiatic | Arabic (ar), Hebrew (he) |
| Austro-Asiatic | Vietnamese (vi) |
| Austronesian | Indonesian (id), Javanese (jv), Malay (ms), Tagalog (tl) |
| Baltic | Latvian (lv), Lithuanian (lt) |
| Basque | Basque (eu) |
| Dravidian | Malayalam (ml), Tamil (ta), Telugu (te) |
| Indo-European (Armenian) | Armenian (hy) |
| Indo-European (Indo-Aryan) | Bengali (bn), Marathi (mr), Hindi (hi), Urdu (ur) |
| Indo-European (Germanic) | Afrikaans (af), Danish (da), English (en), German (de), Swedish (sv) |
| Indo-European (Romance) | French (fr), Italian (it), Portuguese (pt), Romanian (ro), Spanish (es) |
| Indo-European (Greek) | Greek (el) |
| Indo-European (Iranian) | Ossetian (os), Tajik (tg), Persian (fa) |
| Japonic | Japanese (ja) |
| Kartvelian | Georgian (ka) |
| Koreanic | Korean (ko) |
| Kra-Dai | Thai (th) |
| Mongolic | Buryat (bxr), Kalmyk (xal), Mongolian (mn) |
| Niger-Congo | Swahili (sw), Yoruba (yo) |
| Slavic | Belarusian (be), Bulgarian (bg), Russian (ru), Ukrainian (uk), Polish (pl) |
| Sino-Tibetan | Burmese (my) |
| Turkic (Karluk) | Uzbek (uz) |
| Turkic (Kipchak) | Bashkir (ba), Kazakh (kk), Kyrgyz (ky), Tatar (tt) |
| Turkic (Oghuz) | Azerbaijani (az), Chuvash (cv), Turkish (tr), Turkmen (tk) |
| Turkic (Siberian) | Tuvan (tyv), Yakut (sah) |
| Uralic | Estonian (et), Finnish (fi), Hungarian (hu) |
Technical Details
The model was pretrained on 16 V100 GPUs for 600k training steps with a set of fixed hyperparameters: vocabulary size of 100k, context window of 2048, learning rate of 2e-4, and batch size of 4.
The mGPT architecture is based on GPT-3. We use the architecture description by Brown et al. (2020), the GPT-2 code base (Radford et al., 2019) from the HuggingFace library (Wolf et al., 2020), and Megatron-LM (Shoeybi et al., 2019).
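As a rough sketch of how the stated values map onto a GPT-2-style configuration (only the vocabulary size and context window come from this card; the depth, width, and head counts of the 13B model are not given here and are therefore left unset):

```python
# Sketch only: the hyperparameters stated above expressed as a GPT-2-style
# config object plus a plain record of the training-run settings.
from transformers import GPT2Config

config = GPT2Config(
    vocab_size=100_000,  # 100k BPE vocabulary
    n_positions=2048,    # 2048-token context window
)

# Training-run settings stated in this card.
training_setup = {
    "gpus": "16 x V100",
    "training_steps": 600_000,
    "learning_rate": 2e-4,
    "batch_size": 4,
}
```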
Perplexity
The mGPT 13B model achieves the best perplexities, within the 2-to-10 range, for the majority of languages, including Dravidian (Malayalam, Tamil, Telugu), Indo-Aryan (Bengali, Hindi, Marathi), Slavic (Belarusian, Ukrainian, Russian, Bulgarian), Sino-Tibetan (Burmese), Kipchak (Bashkir, Kazakh), and others. Higher perplexities, up to 20, are observed for only seven languages from different families.
Figure: language-wise perplexity results.
Figure: family-wise perplexity results.
The scores are averaged over the number of languages within each family.
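For reference, per-language perplexity of this kind can be computed with a standard token-level evaluation loop; below is a minimal sketch (the checkpoint id and the 2048-token truncation are assumptions on top of what this card states):

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(texts, model_name="ai-forever/mGPT-13B", max_length=2048):
    """Average token-level perplexity of the model over texts in one language."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for text in texts:
            ids = tokenizer(text, return_tensors="pt",
                            truncation=True, max_length=max_length).input_ids
            # labels=ids yields the mean cross-entropy over predicted tokens.
            loss = model(ids, labels=ids).loss
            n = ids.size(1) - 1  # number of predicted tokens
            total_nll += loss.item() * n
            total_tokens += n
    return math.exp(total_nll / total_tokens)
```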
License
The model is released under the MIT license.