
RoBERTa TR Medium WP 44k

Developed by ctoraman
A case-insensitive RoBERTa model for Turkish, pre-trained with a masked language modeling objective and suited to Turkish text processing tasks.
Released: 3/9/2022

Model Overview

This model is a RoBERTa variant optimized for Turkish, using a WordPiece tokenizer with a vocabulary size of 44.5k. The architecture resembles bert-medium: 8 layers, 8 attention heads, and a hidden size of 512.
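As a quick check of that configuration, the model can be loaded with the Transformers library and its config inspected. This is a minimal sketch; the repository id ctoraman/RoBERTa-TR-medium-wp-44k is assumed from the model name above.

```python
# Minimal sketch: load the model and confirm the architecture described above.
# The repo id is an assumption based on the model name in this card.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "ctoraman/RoBERTa-TR-medium-wp-44k"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

cfg = model.config
# Expected per the card: 8 layers, 8 heads, hidden size 512, ~44.5k vocab.
print(cfg.num_hidden_layers, cfg.num_attention_heads, cfg.hidden_size, cfg.vocab_size)
```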

Model Features

Turkish language optimization
Specifically pre-trained and optimized for the Turkish language
WordPiece tokenization
Uses a WordPiece tokenizer with 44.5k vocabulary
Medium-sized architecture
A lightweight architecture with 8 layers, 8 attention heads, and a hidden size of 512
Case-insensitive
The model ignores letter case, so it handles Turkish text regardless of capitalization

Model Capabilities

Turkish text understanding
Masked language modeling (see the example after this list)
Sequence classification
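The masked language modeling capability can be exercised with the fill-mask pipeline. This is a hedged sketch: the repo id is assumed as above, and the mask token is read from the tokenizer rather than hard-coded, since the exact token string for this checkpoint is not stated in the card.

```python
# Sketch: masked word prediction with the fill-mask pipeline.
# The repo id is an assumption; the mask token is taken from the tokenizer.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ctoraman/RoBERTa-TR-medium-wp-44k")

# Turkish: "The weather is very <mask> today." (lowercase, since the model is uncased)
masked = f"bugün hava çok {fill_mask.tokenizer.mask_token}."
for pred in fill_mask(masked):
    print(pred["token_str"], round(pred["score"], 3))
```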

Use Cases

Natural Language Processing
Turkish text classification
Can be used for sentiment analysis, topic classification, and similar tasks on Turkish text; a fine-tuning sketch follows this list
Turkish language understanding
Suitable for applications that require understanding Turkish text
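For the classification use case, a sequence classification head can be attached to the pre-trained encoder and then fine-tuned. In this sketch the repo id is assumed as above, and the two-label setup (e.g., positive/negative sentiment) is an illustrative assumption, not part of the released checkpoint.

```python
# Sketch: attach a classification head for fine-tuning on Turkish text.
# num_labels=2 is an illustrative choice (e.g., sentiment polarity).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ctoraman/RoBERTa-TR-medium-wp-44k"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Turkish: "this movie was great" (lowercase, since the model is uncased)
inputs = tokenizer("bu film harikaydı", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# The freshly initialized head is untrained, so these scores are
# meaningless until the model is fine-tuned on labeled data.
print(logits.softmax(dim=-1))
```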