AfriBERTa Small

Developed by castorini
AfriBERTa Small is a 97-million-parameter multilingual pretrained model supporting 11 African languages, suitable for tasks like text classification and named entity recognition.
Downloads: 160
Release time: 3/2/2022

Model Overview

This model is a multilingual pretrained model optimized for African languages, excelling in low-resource language environments and particularly suitable for NLP tasks involving African languages.

Model Features

Multilingual support
Specially optimized for 11 African languages including low-resource languages like Oromo and Amharic
Lightweight design
Compact model with only 97 million parameters, suitable for deployment in resource-constrained environments
Cross-lingual generalization
Remains competitive even on African languages not seen during pretraining

Model Capabilities

Text classification
Named entity recognition
Multilingual text processing
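The capabilities above can be exercised through the Hugging Face transformers library. A minimal sketch, assuming the checkpoint is published on the Hub under the ID castorini/afriberta_small and that the example Swahili sentence is purely illustrative:

```python
# Sketch: loading AfriBERTa Small with Hugging Face transformers.
# Assumes the checkpoint is hosted on the Hub as "castorini/afriberta_small".
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "castorini/afriberta_small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Tokenize an example Swahili sentence and run the masked-language-model head.
text = "Rais wa Tanzania amezungumza na waandishi wa habari."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```

For downstream tasks such as text classification, the same checkpoint would typically be loaded with a task-specific head (e.g. AutoModelForSequenceClassification) and fine-tuned on labeled data.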

Use Cases

News analysis
African news classification: classifying multilingual news content from Africa. Performs well on BBC News data.
Language processing
Low-resource language NER: named entity recognition for low-resource African languages. Outperforms comparable models on languages not seen during pretraining.
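For the NER use case, the pretrained encoder can be fitted with a token-classification head and then fine-tuned on labeled data. A hedged sketch, where the checkpoint ID is assumed to be castorini/afriberta_small and the tag set below is an illustrative CoNLL-style example, not the model's own:

```python
# Sketch: attaching a token-classification (NER) head to AfriBERTa Small.
# The label set here is illustrative (CoNLL-style tags), an assumption for
# demonstration; real fine-tuning would use the labels of the target dataset.
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
model = AutoModelForTokenClassification.from_pretrained(
    "castorini/afriberta_small",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_small")
# The classification head is randomly initialized here; it would then be
# fine-tuned on annotated NER data, e.g. with the transformers Trainer API.
```

Because the encoder weights stay pretrained while only the small head is newly initialized, this setup is what lets the model transfer to low-resource languages with relatively little labeled data.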