
Chinese RoBERTa L-8 H-512

Developed by uer
A Chinese RoBERTa model pre-trained on CLUECorpusSmall, with 8 layers and a hidden size of 512, suitable for a wide range of Chinese NLP tasks.
Downloads: 37
Release time: 3/2/2022

Model Overview

This is a Chinese pre-trained language model based on the RoBERTa architecture. It supports masked language modeling and can be used for text feature extraction and for fine-tuning on downstream NLP tasks.
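As a minimal sketch of the masked language modeling use, assuming the checkpoint is hosted on the Hugging Face Hub under the id uer/chinese_roberta_L-8_H-512 and loaded through the Transformers fill-mask pipeline:

```python
from transformers import pipeline

# Fill-mask sketch; "uer/chinese_roberta_L-8_H-512" is the assumed Hub id
# for this 8-layer, 512-hidden checkpoint.
unmasker = pipeline("fill-mask", model="uer/chinese_roberta_L-8_H-512")

# "Beijing is the capital of [MASK]." -- the model ranks candidate tokens.
for prediction in unmasker("北京是[MASK]国的首都。"):
    print(prediction["token_str"], round(prediction["score"], 4))
```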

Model Features

Multiple model sizes
Offers 24 model sizes, ranging from ultra-small (2 layers, hidden size 128) to base (12 layers, hidden size 768).
Chinese optimization
Pre-trained specifically for Chinese text on the CLUECorpusSmall corpus.
Two-stage training
First pre-trained with a sequence length of 128, then further pre-trained with a sequence length of 512 to improve handling of longer inputs.

Model Capabilities

Chinese text understanding
Masked language modeling
Text feature extraction (see the sketch after this list)
Downstream task fine-tuning
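A minimal feature-extraction sketch, again assuming the uer/chinese_roberta_L-8_H-512 Hub id; UER's Chinese RoBERTa checkpoints load with the BERT classes in Transformers:

```python
import torch
from transformers import BertTokenizer, BertModel

model_name = "uer/chinese_roberta_L-8_H-512"  # assumed Hub id
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)

# Encode a sentence and take the per-token hidden states as features.
inputs = tokenizer("用这个模型提取文本特征。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Shape: (batch, seq_len, 512); the [CLS] vector is a common sentence feature.
print(outputs.last_hidden_state.shape)
print(outputs.last_hidden_state[:, 0].shape)  # [CLS] embedding, (1, 512)
```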

Use Cases

Sentiment analysis
Product review sentiment analysis: analyze sentiment tendencies in user reviews on e-commerce platforms. Achieved 94.8% accuracy on the ChnSentiCorp Chinese sentiment analysis benchmark (a fine-tuning sketch follows this section).

Text classification
News classification: automatically classify news articles by topic. Achieved 65.6% accuracy on the CLUE news classification task (TNEWS).

Natural language inference
Textual entailment recognition: determine the logical relationship between sentence pairs. Achieved 71.2% accuracy on the CLUE natural language inference task (OCNLI).
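To illustrate the sentiment analysis use case above, the sketch below attaches a two-class classification head to the checkpoint and runs one toy training step; the Hub id, label scheme, and hyperparameters are illustrative assumptions, not values from this page:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

model_name = "uer/chinese_roberta_L-8_H-512"  # assumed Hub id
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

# One toy review/label pair; a real run iterates over a labeled dataset
# such as an e-commerce review corpus.
batch = tokenizer(["这款手机很好用,电池也耐用。"], return_tensors="pt",
                  padding=True, truncation=True, max_length=128)
labels = torch.tensor([1])  # assumed scheme: 1 = positive, 0 = negative

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
loss = model(**batch, labels=labels).loss  # cross-entropy over 2 labels
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```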