RoBERTa Base Serbian
This is a RoBERTa model for Serbian (Cyrillic and Latin scripts), pretrained on the srWaC corpus and suitable for fine-tuning on downstream tasks.
Release Time: 4/17/2022
Model Overview
This is a pretrained Serbian language model based on the RoBERTa architecture. It supports both the Cyrillic and Latin scripts and can be used for tasks such as masked language modeling.
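A minimal loading sketch with the Hugging Face transformers library is shown below. The checkpoint ID is an assumption made for illustration, not taken from this page; substitute the ID from the model's hosting page.

```python
# Minimal loading sketch, assuming a Hugging Face transformers checkpoint.
# "KoichiYasuoka/roberta-base-serbian" is an assumed ID; replace it with
# the actual checkpoint ID from the model's hosting page.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "KoichiYasuoka/roberta-base-serbian"  # assumption
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
```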
Model Features
Dual-script support
Supports both Cyrillic and Latin scripts for Serbian
Pretraining foundation
Pretrained on the srWaC (Serbian web) corpus
Downstream task adaptation
Can be fine-tuned for downstream tasks such as POS tagging and dependency parsing
Model Capabilities
Masked language modeling
Language representation learning
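As a sketch of the masked-language-modeling capability, the fill-mask pipeline below predicts a masked word in the same sentence written in each script. The checkpoint ID and example sentences are assumptions for illustration.

```python
# Fill-mask demo of masked language modeling; the checkpoint ID and
# example sentences are illustrative assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="KoichiYasuoka/roberta-base-serbian")
mask = fill.tokenizer.mask_token

# Cyrillic script: "Belgrade is the capital city of <mask>."
print(fill(f"Београд је главни град {mask}."))

# The same sentence in the Latin script.
print(fill(f"Beograd je glavni grad {mask}."))
```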
Use Cases
Natural Language Processing
POS tagging
Fine-tune the model for Serbian part-of-speech tagging (see the fine-tuning sketch after this list)
Dependency parsing
Fine-tune the model for Serbian dependency parsing (syntactic analysis)
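The sketch below outlines POS-tagging fine-tuning as token classification with transformers. The checkpoint ID, label set, output directory, and hyperparameters are placeholder assumptions, not the model authors' recipe.

```python
# Fine-tuning sketch for POS tagging as token classification.
# Checkpoint ID, label set, and hyperparameters are assumptions.
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          Trainer, TrainingArguments)

model_id = "KoichiYasuoka/roberta-base-serbian"  # assumption
# Universal POS tags, used here as an example label inventory.
upos = ["ADJ", "ADP", "ADV", "AUX", "CCONJ", "DET", "INTJ", "NOUN", "NUM",
        "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "SYM", "VERB", "X"]

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(
    model_id,
    num_labels=len(upos),
    id2label=dict(enumerate(upos)),
    label2id={t: i for i, t in enumerate(upos)},
)

args = TrainingArguments(output_dir="roberta-base-serbian-upos",  # placeholder
                         learning_rate=2e-5, num_train_epochs=3)
# With a tokenized, label-aligned dataset (hypothetical train_ds/eval_ds):
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```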