
Chinese RoBERTa L-4 H-256

Developed by uer
A Chinese RoBERTa model pre-trained on CLUECorpusSmall, with 4 layers and a hidden size of 256, suitable for a wide range of Chinese NLP tasks.
Downloads: 70
Release Time: 3/2/2022

Model Overview

This model is the mini version (4 layers, hidden size 256) in the Chinese RoBERTa mini-model series. It adopts the RoBERTa pre-training approach and is suited to tasks such as masked language modeling and text feature extraction.
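
As a quick illustration of masked language modeling, the checkpoint this page describes can be loaded with the Hugging Face transformers pipeline API; the example sentence below is ours, not from the model card:

```python
from transformers import pipeline

# Load the checkpoint described on this page for masked-token prediction.
unmasker = pipeline("fill-mask", model="uer/chinese_roberta_L-4_H-256")

# Predict the masked character; the sentence is illustrative.
print(unmasker("中国的首都是[MASK]京。"))
```

The pipeline returns the top candidate tokens for the [MASK] position together with their scores.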

Model Features

Multi-stage Training
Pre-trained for 1,000,000 steps with a sequence length of 128, then for an additional 250,000 steps with a sequence length of 512, so the model also learns representations for longer sequences.
Chinese Optimization
Pre-trained from scratch on the Chinese CLUECorpusSmall corpus, making it well suited to Chinese-language tasks.
Multiple Size Options
The series offers 24 model sizes, from 2 layers with hidden size 128 up to the 12-layer, hidden-size-768 base model, to match different computational budgets.

Model Capabilities

Chinese Text Understanding
Masked Language Modeling
Text Feature Extraction (see the sketch after this list)
Downstream Task Fine-tuning
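
A minimal sketch of the feature-extraction capability, assuming the uer/chinese_roberta_L-4_H-256 checkpoint and the BERT model classes these RoBERTa checkpoints are served with; the input sentence is illustrative:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("uer/chinese_roberta_L-4_H-256")
model = BertModel.from_pretrained("uer/chinese_roberta_L-4_H-256")

text = "这家餐厅的服务很周到。"  # any Chinese text works here
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state holds one 256-dimensional vector per token
# (256 is this model's hidden size).
features = outputs.last_hidden_state
print(features.shape)  # torch.Size([1, seq_len, 256])
```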

Use Cases

Text Understanding
Chinese Sentiment Analysis
Can be used, after fine-tuning, to classify the sentiment polarity of Chinese text (see the fine-tuning sketch after this list).
Achieved 93.4% accuracy on a Chinese sentiment analysis task.
News Classification
Can be used for Chinese news text classification.
Achieved 65.1% accuracy in news classification tasks.
Language Reasoning
Natural Language Inference
Can be used for natural language inference over Chinese sentence pairs, e.g. judging whether one sentence entails another.
Achieved 69.7% accuracy in natural language inference tasks.
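
The sketch below shows how the sentiment analysis use case above could be set up as fine-tuning for binary classification. It is a minimal, hypothetical example: the two sentences and labels stand in for a real dataset such as ChnSentiCorp, and the hyperparameters are placeholders, not the recipe behind the scores listed above.

```python
import torch
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "uer/chinese_roberta_L-4_H-256"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Adds a randomly initialized classification head on top of the encoder.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Toy data standing in for a real sentiment corpus (1 = positive, 0 = negative).
texts = ["这部电影非常好看!", "物流太慢,体验很差。"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=3e-5)

model.train()
for _ in range(3):  # a few gradient steps just to show the loop
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Because of its small size, this model fine-tunes quickly even on modest hardware, which is the main trade-off the mini-model series is designed around.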