
Chinese RoBERTa L-8 H-512

Developed by uer
A Chinese RoBERTa model pre-trained on CLUECorpusSmall, with 8 transformer layers and a hidden size of 512, supporting masked language modeling.
Release date: 3/2/2022

Model Overview

This model is the medium-sized member of the Chinese RoBERTa miniatures series. It is well suited to Chinese text understanding tasks and performs particularly well on masked-token prediction.
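As a quick illustration of the masked-prediction capability, here is a minimal sketch using the Hugging Face fill-mask pipeline. The model ID uer/chinese_roberta_L-8_H-512 is an assumption inferred from the developer and model name, and the example sentence is one possible rendering of the geographic-knowledge use case below; neither is stated verbatim in this card.

```python
# Minimal sketch; assumes the model is published on the Hugging Face Hub
# under the ID "uer/chinese_roberta_L-8_H-512" (not stated in this card).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="uer/chinese_roberta_L-8_H-512")

# Ask the model to fill in the masked token; the sentence reads
# "Beijing is the capital of [MASK] country."
for candidate in unmasker("北京是[MASK]国的首都。"):
    print(candidate["token_str"], candidate["score"])
```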

Model Features

Multiple size options
The series provides 24 models at different parameter scales, from ultra-small to base, to fit different computational budgets.
Two-stage training
Uses a two-stage strategy that pre-trains on short sequences first and then on long sequences, improving the model's handling of texts of different lengths.
Public corpus training
Pre-trained on the publicly available CLUECorpusSmall corpus, so results are reproducible.

Model Capabilities

Chinese text understanding
Masked language modeling
Text feature extraction
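
To make the feature-extraction capability concrete, the sketch below loads the encoder with the generic BERT classes; as above, the model ID is an assumed Hub identifier and the input sentence is illustrative.

```python
# Minimal sketch of text feature extraction; the model ID is an assumption.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("uer/chinese_roberta_L-8_H-512")
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")

text = "这是一段用于提取特征的中文文本。"  # "A Chinese sentence for feature extraction."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch, seq_len, 512) for this model size,
# giving one 512-dimensional feature vector per token.
print(outputs.last_hidden_state.shape)
```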

Use Cases

Text completion
Geographic knowledge completion
Filling masked tokens in sentences that involve geographic knowledge, such as 'Beijing is the capital of [MASK]'
Accurately predicts 'China' for the masked position.
Sentiment analysis
Comment sentiment judgment
Determines the sentiment polarity of user comments.
Achieves 93.4% accuracy on Chinese sentiment analysis tasks.
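
The sentiment use case typically means pairing the encoder with a classification head and fine-tuning it. The sketch below only shows the inference plumbing with the generic transformers classes; the two-label setup and the example comment are illustrative assumptions, not part of the original card, and the quoted accuracy requires fine-tuning on labeled data first.

```python
# Illustrative sketch only: attaches an untrained 2-label classification head.
# The reported accuracy (e.g. 93.4%) applies to a fine-tuned model; the
# labels and input text below are hypothetical.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

model_id = "uer/chinese_roberta_L-8_H-512"  # assumed Hub ID
tokenizer = BertTokenizer.from_pretrained(model_id)
model = BertForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer("这家餐厅的菜很好吃！", return_tensors="pt")  # "The food here is great!"
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the two logits (e.g. negative/positive); the scores are
# meaningful only after the head has been fine-tuned.
print(torch.softmax(logits, dim=-1))
```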