
RoBERTa Small Greek

Developed by ClassCat
This is a compact Greek language model based on the RoBERTa architecture, with roughly half the parameters of the base model. It is suited to the fill-mask (masked-token prediction) task on Greek text.
Released: July 20, 2022

Model Overview

This model is a compact Greek language model based on the RoBERTa architecture, intended primarily for the fill-mask task on Greek text. It uses a byte-pair-encoding (BPE) tokenizer with a 50,000-token vocabulary. The training data comprises subsets of CC-100/el, OSCAR, and Greek Wikipedia.
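For masked filling, the model can be queried through the transformers fill-mask pipeline. The snippet below is a minimal usage sketch; the repository id ClassCat/roberta-small-greek is an assumption inferred from the model name and author, and RoBERTa-style models mark the blank with "<mask>":

```python
from transformers import pipeline

# Assumed repository id (inferred from the card); adjust if the hosted name differs.
fill_mask = pipeline("fill-mask", model="ClassCat/roberta-small-greek")

# "Η Αθήνα είναι η πρωτεύουσα της <mask>." = "Athens is the capital of <mask>."
for prediction in fill_mask("Η Αθήνα είναι η πρωτεύουσα της <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```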

Model Features

Compact design
Roughly half the parameters of the RoBERTa base model, making it suitable for resource-constrained environments.
Greek language support
Trained and optimized specifically for Greek text, suitable for Greek natural language processing tasks.
BPE tokenizer
Uses a BPE tokenizer with a 50,000-token vocabulary for effective subword segmentation of Greek text (see the sketch after this list).
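To illustrate the tokenizer, this sketch loads it and segments a Greek sentence into BPE subword pieces (again assuming the repository id ClassCat/roberta-small-greek):

```python
from transformers import AutoTokenizer

# Assumed repository id (inferred from the card); adjust if the hosted name differs.
tokenizer = AutoTokenizer.from_pretrained("ClassCat/roberta-small-greek")

print(tokenizer.vocab_size)                    # expected to be 50000 per this card
print(tokenizer.tokenize("Καλημέρα, κόσμε!"))  # BPE subword pieces for Greek text
```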

Model Capabilities

Masked-token filling (fill-mask) for Greek text

Use Cases

Natural language processing
Text completion
Filling in missing (masked) tokens in Greek text.
Language model pre-training
Serving as a base checkpoint for further pre-training or fine-tuning of Greek language models, as sketched below.
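As a starting point for continued pre-training or fine-tuning, the model loads as a standard masked-LM checkpoint. This is a minimal sketch, assuming the same repository id as above:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Assumed repository id (inferred from the card); adjust if the hosted name differs.
model_id = "ClassCat/roberta-small-greek"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# From here the checkpoint can be fine-tuned on domain-specific Greek text,
# e.g. with the Hugging Face Trainer and DataCollatorForLanguageModeling.
print(sum(p.numel() for p in model.parameters()), "parameters")
```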