InkubaLM-0.4B: Small language model for low-resource African languages
InkubaLM-0.4B is a small language model trained for five African languages, along with English and French, aiming to facilitate research on low-resource African languages.

Documentation
Model Details
InkubaLM was trained from scratch using 1.9 billion tokens of data for five African languages, combined with English and French data, making a total of 2.4 billion tokens. Similar to the MobileLLM architecture, InkubaLM was trained with a parameter size of 0.4 billion and a vocabulary size of 61788. For detailed information on training, benchmarks, and performance, please refer to our full blog post.
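As a quick sanity check on the model size, the short sketch below loads the checkpoint and counts its parameters (a minimal example; downloading the weights requires the transformers library and network access):

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("lelapa/InkubaLM-0.4B", trust_remote_code=True)
# Sum the element counts of all weight tensors; this should come to roughly 0.4 billion.
print(f"Total parameters: {sum(p.numel() for p in model.parameters()):,}")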
Model Description
- Developed by: Lelapa AI - Fundamental Research Team.
- Model type: Small Language Model (SLM) for five African languages built using the architecture design of LLaMA-7B.
- Language(s) (NLP): isiZulu, Yoruba, Swahili, isiXhosa, Hausa, English and French.
- License: CC BY-NC 4.0.
Model Sources
Quick Start
Installation
Use the following command to install the necessary library:
pip install transformers
Usage Examples
Basic Usage
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model; trust_remote_code is required for the custom architecture.
tokenizer = AutoTokenizer.from_pretrained("lelapa/InkubaLM-0.4B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("lelapa/InkubaLM-0.4B", trust_remote_code=True)

text = "Today I planned to"
inputs = tokenizer(text, return_tensors="pt")
input_ids = inputs.input_ids
attention_mask = inputs.attention_mask

# Generate up to 60 tokens, using the EOS token for padding.
outputs = model.generate(input_ids, attention_mask=attention_mask, max_length=60, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
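The same generation can also be run through the transformers pipeline API; the snippet below is a minimal sketch, assuming the model's custom code works with the standard text-generation pipeline:

from transformers import pipeline

# trust_remote_code is needed because the model ships custom modeling code.
generator = pipeline("text-generation", model="lelapa/InkubaLM-0.4B", trust_remote_code=True)
print(generator("Today I planned to", max_length=60)[0]["generated_text"])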
Advanced Usage
Using full precision
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lelapa/InkubaLM-0.4B", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("lelapa/InkubaLM-0.4B", trust_remote_code=True)
model.to('cuda')
text = "Today i planned to "
input_ids = tokenizer(text, return_tensors="pt").to('cuda').input_ids
outputs = model.generate(input_ids, max_length=1000, repetition_penalty=1.2, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip())
Using torch.bfloat16
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
checkpoint = "lelapa/InkubaLM-0.4B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# Load the weights in bfloat16 to reduce the memory footprint relative to full precision.
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
inputs = tokenizer.encode("Today I planned to ", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
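To confirm how much memory the bfloat16 weights occupy, you can query the model's footprint; a brief sketch using the get_memory_footprint helper from transformers:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("lelapa/InkubaLM-0.4B", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
# get_memory_footprint reports the approximate size of the loaded weights in bytes.
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")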
Using quantized versions via bitsandbytes
First, install the necessary libraries:
pip install bitsandbytes accelerate
Then, use the following code:
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
checkpoint = "lelapa/InkubaLM-0.4B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, quantization_config=quantization_config, trust_remote_code=True)
inputs = tokenizer.encode("Today I planned to ", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
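For even lower memory usage, bitsandbytes also supports 4-bit loading. The snippet below is a sketch of the same setup with 4-bit NF4 quantization; the quantization settings shown are illustrative, not values recommended by Lelapa AI:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with bfloat16 compute; adjust to your hardware.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
checkpoint = "lelapa/InkubaLM-0.4B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, quantization_config=quantization_config, trust_remote_code=True)
inputs = tokenizer.encode("Today I planned to ", return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_length=60, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))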
Technical Details
Training Data
Training Hyperparameters
| Property | Details |
| --- | --- |
| Total Parameters | 0.422B |
| Hidden Size | 2048 |
| Intermediate Size (MLPs) | 5632 |
| Number of Attention Heads | 32 |
| Number of Hidden Layers | 8 |
| RMSNorm ε | 1e-5 |
| Max Seq Length | 2048 |
| Vocab Size | 61788 |
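These values can be cross-checked against the published configuration; the sketch below assumes the config exposes the standard LLaMA-style field names, which may differ for this custom architecture:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("lelapa/InkubaLM-0.4B", trust_remote_code=True)
# Standard field names assumed; compare each value against the table above.
print(config.hidden_size)          # expected 2048
print(config.intermediate_size)    # expected 5632
print(config.num_attention_heads)  # expected 32
print(config.num_hidden_layers)    # expected 8
print(config.vocab_size)           # expected 61788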
Limitations
The InkubaLM model was trained on multilingual datasets but has some limitations. It can understand and generate content in five African languages: Swahili, Yoruba, Hausa, isiZulu, and isiXhosa, as well as English and French. However, the generated content may not always be entirely accurate, logically consistent, or free from the biases present in the training data. Additionally, the model may switch between languages when generating text. Nevertheless, it is intended as a foundational tool to aid research in African languages.
Ethical Considerations and Risks
InkubaLM is a small LM developed for five African languages. The model has been evaluated only on sentiment analysis, machine translation, AfriMMLU, and AfriXNLI tasks, and has yet to cover all possible evaluation scenarios. As with other language models, it is impossible to predict all of InkubaLM's potential outputs in advance, and in some cases the model may produce inaccurate, biased, or objectionable responses. Therefore, before using the model in any application, users should conduct safety testing and tuning tailored to their intended use.
License
This model is released under the CC BY-NC 4.0 license.
Citation
@article{tonja2024inkubalm,
title={InkubaLM: A small language model for low-resource African languages},
author={Tonja, Atnafu Lambebo and Dossou, Bonaventure FP and Ojo, Jessica and Rajab, Jenalea and Thior, Fadel and Wairagala, Eric Peter and Anuoluwapo, Aremu and Moiloa, Pelonomi and Abbott, Jade and Marivate, Vukosi and others},
journal={arXiv preprint arXiv:2408.17024},
year={2024}
}
Model Card Authors
Lelapa AI - Fundamental Research Team
Model Card Contact
Lelapa AI