RoBERTa Base
Developed by klue
A RoBERTa model pretrained on Korean, suitable for various Korean natural language processing tasks.
Downloads: 1.2M
Release Time: 3/2/2022
Model Overview
KLUE RoBERTa Base is a RoBERTa model pretrained on Korean, primarily used for understanding and processing Korean text.
Model Features
Korean Pretraining
Pretrained specifically on Korean corpora, so its vocabulary and representations are optimized for Korean text.
Based on RoBERTa Architecture
Adopts the RoBERTa architecture, offering powerful text representation capabilities.
Compatible with BertTokenizer
Uses BertTokenizer rather than RobertaTokenizer, so its vocabulary and preprocessing are directly compatible with BERT-style models.
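The tokenizer note above can be sketched in code. This is a minimal, hedged example assuming the Hugging Face transformers API; the helper name `load_korean_encoder` is illustrative, not part of the model card.

```python
# Hedged sketch: loading klue/roberta-base with Hugging Face transformers.
# The model card states the checkpoint uses a BERT-style tokenizer, so
# BertTokenizer (or AutoTokenizer) is used instead of RobertaTokenizer.

MODEL_NAME = "klue/roberta-base"

def load_korean_encoder(name: str = MODEL_NAME):
    """Return (tokenizer, model) for the Korean RoBERTa encoder."""
    # Imported lazily so the sketch can be read without transformers installed.
    from transformers import AutoModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained(name)  # not RobertaTokenizer
    model = AutoModel.from_pretrained(name)
    return tokenizer, model
```

Encoding a sentence then looks like `tokenizer("안녕하세요", return_tensors="pt")` followed by a forward pass; a base-size encoder like this typically produces 768-dimensional hidden states.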
Model Capabilities
Text Classification
Named Entity Recognition
Sentiment Analysis
Question Answering
Masked Token Prediction (fill-mask)
Use Cases
Natural Language Processing
Korean Text Classification
Used for classifying Korean text, such as news categorization and sentiment analysis.
Korean Named Entity Recognition
Identifies named entities in Korean text, such as person names, locations, and organization names.
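The classification and sentiment use cases above are typically realized by fine-tuning the encoder with a task head. Below is a minimal sketch assuming the Hugging Face transformers API; the two-label sentiment setup and the helper name `build_korean_classifier` are illustrative assumptions, not part of the checkpoint.

```python
NUM_LABELS = 2  # illustrative assumption: e.g. positive/negative sentiment

def build_korean_classifier(name: str = "klue/roberta-base",
                            num_labels: int = NUM_LABELS):
    """Attach a freshly initialized classification head to the encoder.

    The head is randomly initialized and must be fine-tuned on labeled
    Korean data (e.g. a KLUE benchmark task) before it is useful.
    """
    # Imported lazily so the sketch can be read without transformers installed.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(
        name, num_labels=num_labels
    )
    return tokenizer, model
```

The same pattern applies to named entity recognition by swapping in `AutoModelForTokenClassification`, which predicts one label per token instead of one per sentence.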