Roberta Base Turkish Uncased

Developed by burakaytan
A RoBERTa base model for Turkish, pre-trained on a 38GB Turkish corpus
Downloads 57
Released: 4/20/2022

Model Overview

This is a RoBERTa base model for Turkish, used primarily for masked language modeling and supporting Turkish text understanding and generation tasks.
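As a sketch of the masked language modeling use, the model can be loaded with the Hugging Face `fill-mask` pipeline. The hub id `burakaytan/roberta-base-turkish-uncased` is assumed from the developer and model names above; adjust it if the actual repository name differs.

```python
# Minimal fill-mask sketch; model id "burakaytan/roberta-base-turkish-uncased"
# is assumed from the model card, not confirmed here.
from transformers import pipeline

fill = pipeline("fill-mask", model="burakaytan/roberta-base-turkish-uncased")

# RoBERTa tokenizers use "<mask>" as the mask token.
results = fill("İsmim Burak ve favori şehrim <mask>.")
for r in results:
    print(r["token_str"], round(r["score"], 4))
```

Each result carries the predicted token string and its probability, sorted from most to least likely.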

Model Features

Large-scale Turkish pre-training
Trained on a 38GB Turkish corpus (Wikipedia, the OSCAR corpus, and news website data)
High-performance hardware training
Training was completed on hardware equipped with Intel Xeon Gold processors and Tesla V100 GPUs
Optimized Turkish language processing
Tailored to the characteristics of the Turkish language for better handling of Turkish text

Model Capabilities

Turkish text understanding
Masked language modeling
Text completion
Semantic analysis

Use Cases

Text completion
Cloze application
Predict masked words in a sentence; the model accurately recovers key masked words in Turkish text
Semantic analysis
Text similarity calculation
Calculate semantic similarity between Turkish texts
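The similarity use case above can be sketched by mean-pooling the model's hidden states into sentence embeddings and comparing them with cosine similarity. This pooling approach is a common assumption, not a method documented by the model's author, and the hub id is again assumed to be `burakaytan/roberta-base-turkish-uncased`.

```python
# Hedged sketch: sentence similarity via mean-pooled hidden states.
# The hub id below is an assumption based on the model card.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "burakaytan/roberta-base-turkish-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last hidden state over non-padding tokens."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)    # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

a = embed("Bugün hava çok güzel.")
b = embed("Hava bugün gerçekten güzel.")
similarity = torch.cosine_similarity(a, b).item()
print(f"cosine similarity: {similarity:.3f}")
```

For production-quality similarity, a model fine-tuned on sentence pairs would typically give better embeddings than raw masked-LM hidden states.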