🚀 Meltemi: A large foundation Language Model for the Greek language
Meltemi is a large foundation language model for the Greek language developed by the Institute for Language and Speech Processing at Athena Research & Innovation Center. It is built on Mistral 7B and extends its capabilities for Greek through continual pretraining on a large corpus of high-quality and locally relevant Greek texts. This README provides detailed information about Meltemi 7B v1.5 and its instruction-fine-tuned version, Meltemi 7B Instruct v1.5.

✨ Features
- Vocabulary Extension: The Mistral 7B tokenizer is extended with Greek tokens, resulting in lower costs and faster inference for Greek texts (1.52 vs. 6.80 tokens/word for Greek; see the sketch after this list).
- Long Context Length: The model supports a context length of 8,192 tokens.
- Enhanced Greek Proficiency: Through continual pretraining on a corpus of approximately 55 billion tokens, Meltemi 7B v1.5 has enhanced proficiency in Greek. The corpus includes 43.3 billion monolingual Greek tokens from publicly available resources, 10.5 billion monolingual English tokens, and 600 million tokens from Greek-English parallel data.
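A minimal sketch of how the tokens-per-word figures can be measured, assuming the tokenizers are fetched from the Hugging Face Hub under the ids `mistralai/Mistral-7B-v0.1` and `ilsp/Meltemi-7B-v1.5` (both ids and the sample sentence are illustrative; measured ratios depend on the text):

```python
from transformers import AutoTokenizer

# Illustrative model ids; adjust to the checkpoints you actually use.
MISTRAL_ID = "mistralai/Mistral-7B-v0.1"
MELTEMI_ID = "ilsp/Meltemi-7B-v1.5"

# Sample Greek sentence used to compare tokenizer fertility (tokens per word).
# Translation: "Meltemi is a large language model for the Greek language."
text = "Το Μελτέμι είναι ένα μεγάλο γλωσσικό μοντέλο για την ελληνική γλώσσα."
num_words = len(text.split())

for model_id in (MISTRAL_ID, MELTEMI_ID):
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    num_tokens = len(tokenizer(text, add_special_tokens=False)["input_ids"])
    print(f"{model_id}: {num_tokens / num_words:.2f} tokens/word")
```

A lower tokens/word ratio means shorter input sequences for the same Greek text, which is where the cost and latency savings come from.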
📦 Installation
Meltemi 7B v1.5 and Meltemi 7B Instruct v1.5 are distributed as Hugging Face checkpoints; a typical setup only needs the `transformers` library and a PyTorch backend (for example, `pip install torch transformers`). The exact package set depends on your inference or fine-tuning environment.
💻 Usage Examples
Basic Usage
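A minimal sketch of chat-style inference with the Hugging Face `transformers` library. The model id `ilsp/Meltemi-7B-Instruct-v1.5`, the system prompt, and the generation settings below are illustrative assumptions, not prescribed values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative Hugging Face model id; adjust to the checkpoint you actually use.
model_id = "ilsp/Meltemi-7B-Instruct-v1.5"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to(device)

messages = [
    {"role": "system", "content": "You are a helpful assistant that answers in Greek."},
    {"role": "user", "content": "Τι είναι το Μελτέμι;"},  # "What is Meltemi?"
]

# The chat template formats the conversation and is expected to prepend the BOS token
# (see the note below).
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```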
⚠️ Important Note
Please make sure that the BOS token is always included in the tokenized prompts. This might not be the default setting in all evaluation or fine-tuning frameworks.
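A quick way to check this with a `transformers` tokenizer (the model id is again an illustrative assumption):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ilsp/Meltemi-7B-Instruct-v1.5")  # illustrative id

# With add_special_tokens=True (the default), the tokenizer should prepend the BOS token.
ids = tokenizer("Καλημέρα!")["input_ids"]  # "Good morning!"
assert ids[0] == tokenizer.bos_token_id, "BOS is missing: enable add_special_tokens or prepend it manually."
```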
📚 Documentation
Model Information
The following tables list the number of tokens used for pretraining Meltemi 7B v1.5 (the respective values for Meltemi 7B v1 are in parentheses):
| Property | Details |
|----------|---------|
| Model Type | Meltemi 7B v1.5, an extension of Mistral 7B for Greek |
| Training Data | See the sub-corpus breakdown below |

| Sub-corpus | # Tokens |
|------------|----------|
| Greek | 43,383,244,502 (28,555,902,360) |
| English | 10,538,413,259 (10,478,414,033) |
| Parallel | 633,816,023 (633,816,023) |
| Total | 54,555,473,784 (39,668,132,416) |
Meltemi 7B v1.5 was trained for fewer than two thirds of the training steps of Meltemi 7B v1.
Evaluation
The evaluation suite is based on a fork of the lighteval framework and includes 6 test sets. The evaluation is performed in a few-shot setting, consistent with the Open LLM Leaderboard.
The differences between the Meltemi 7B v1 scores reported here and previously published ones can be attributed to a different, better-optimized evaluation setup for Greek, i.e., lighteval vs. [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
The evaluation suite includes the following Greek test sets: Medical MCQA EL, Belebele EL, HellaSwag EL, ARC-Challenge EL, TruthfulQA MC2 EL, and MMLU EL.
The results for the Greek test sets are shown in the following table:
| | Medical MCQA EL (15-shot) | Belebele EL (5-shot) | HellaSwag EL (10-shot) | ARC-Challenge EL (25-shot) | TruthfulQA MC2 EL (0-shot) | MMLU EL (5-shot) | Average |
|---|---|---|---|---|---|---|---|
| Mistral 7B | 29.8% | 45.0% | 36.5% | 27.1% | 45.8% | 35.0% | 36.5% |
| Meltemi 7B v1 | 46.3% | 68.5% | 63.3% | 43.6% | 44.6% | 42.4% | 51.4% |
| Meltemi 7B v1.5 | 48.1% | 68.6% | 65.7% | 47.1% | 45.1% | 42.4% | 52.8% |
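The Average column appears to be the unweighted mean of the six task scores; a small sketch that recomputes it from the values in the table above:

```python
# Per-task scores (%) copied from the table above, in column order:
# Medical MCQA EL, Belebele EL, HellaSwag EL, ARC-Challenge EL, TruthfulQA MC2 EL, MMLU EL
scores = {
    "Mistral 7B":      [29.8, 45.0, 36.5, 27.1, 45.8, 35.0],
    "Meltemi 7B v1":   [46.3, 68.5, 63.3, 43.6, 44.6, 42.4],
    "Meltemi 7B v1.5": [48.1, 68.6, 65.7, 47.1, 45.1, 42.4],
}

for model, values in scores.items():
    print(f"{model}: {sum(values) / len(values):.1f}%")  # matches the Average column
```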
Ethical Considerations
This model has been aligned with human preferences, but it might still generate misleading, harmful, or toxic content.
Acknowledgements
The ILSP team utilized Amazon’s cloud computing services, which were made available via GRNET under the [OCRE Cloud framework](https://www.ocre-project.eu/), providing Amazon Web Services for the Greek Academic and Research Community.
Citation
@misc{voukoutis2024meltemiopenlargelanguage,
title={Meltemi: The first open Large Language Model for Greek},
author={Leon Voukoutis and Dimitris Roussis and Georgios Paraskevopoulos and Sokratis Sofianopoulos and Prokopis Prokopidis and Vassilis Papavasileiou and Athanasios Katsamanis and Stelios Piperidis and Vassilis Katsouros},
year={2024},
eprint={2407.20743},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.20743},
}
📄 License
This model is licensed under the Apache 2.0 license.