
Chinese RoBERTa L-6 H-512

Developed by UER
A mid-sized member of the Chinese RoBERTa model series pre-trained with UER-py on the CLUECorpusSmall corpus, suitable for a wide range of Chinese NLP tasks.
Downloads: 19
Release time: 3/2/2022

Model Overview

This is a Chinese pre-trained language model based on the RoBERTa architecture, with 6 transformer layers and a hidden size of 512, supporting tasks such as masked language modeling.
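As a quick illustration of the masked language modeling capability, the sketch below loads the checkpoint through the Hugging Face fill-mask pipeline; the model id uer/chinese_roberta_L-6_H-512 is assumed to follow UER's published naming scheme on the Hub.

```python
from transformers import pipeline

# Assumed Hugging Face Hub model id (UER naming scheme).
unmasker = pipeline("fill-mask", model="uer/chinese_roberta_L-6_H-512")

# The model predicts the token hidden behind [MASK].
print(unmasker("中国的首都是[MASK]京。"))
```

The top candidate should be 北, completing 北京 (Beijing).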

Model Features

Multiple size options
Offers 24 model configurations, from ultra-small to base size, to suit different computational budgets (a loading sketch follows this list).
Chinese optimization
Specifically pre-trained for Chinese text, achieving excellent performance on the CLUE benchmark.
Two-stage training
Uses a two-stage pre-training strategy (short sequences of length 128 first, then long sequences of length 512) to improve model quality.
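The 24 configurations correspond to 6 depth options (2 to 12 layers) times 4 hidden sizes (128 to 768). Below is a minimal loading sketch, assuming the checkpoints follow UER's L-{layers}_H-{hidden} naming pattern on the Hugging Face Hub; the helper function is hypothetical.

```python
from transformers import BertModel

def variant_id(layers: int, hidden: int) -> str:
    # Assumed naming pattern: layers in {2, 4, 6, 8, 10, 12},
    # hidden in {128, 256, 512, 768} -- 24 combinations in total.
    return f"uer/chinese_roberta_L-{layers}_H-{hidden}"

tiny = BertModel.from_pretrained(variant_id(2, 128))    # smallest variant
medium = BertModel.from_pretrained(variant_id(6, 512))  # this model
```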

Model Capabilities

Chinese text understanding
Masked language modeling
Text feature extraction
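For the feature-extraction capability, a minimal sketch, assuming the same Hub model id and that the checkpoint loads with the BERT classes, as UER's Chinese RoBERTa checkpoints typically do:

```python
import torch
from transformers import BertTokenizer, BertModel

model_name = "uer/chinese_roberta_L-6_H-512"  # assumed Hub model id
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)

inputs = tokenizer("用这个模型提取中文文本特征。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token-level features: shape (batch, seq_len, 512).
token_features = outputs.last_hidden_state
# A common sentence-level feature: the [CLS] token's hidden state.
sentence_feature = token_features[:, 0]
```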

Use Cases

Text understanding
Sentiment analysis: analyzes the sentiment tendency of Chinese text; achieves 93.4% accuracy on a Chinese sentiment analysis task (a fine-tuning sketch follows this list).
News classification: classifies Chinese news text; achieves 65.1% accuracy on the CLUE news classification task.
Language reasoning
Natural language inference: determines the logical relationship between sentence pairs; achieves 69.7% accuracy on the CLUE natural language inference task.
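The classification use cases above all follow the same fine-tuning recipe. Below is a minimal sketch of a single training step for binary sentiment classification, assuming the same Hub model id; the texts and labels are toy data for illustration only.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

model_name = "uer/chinese_roberta_L-6_H-512"  # assumed Hub model id
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy batch: 1 = positive, 0 = negative (illustrative data only).
texts = ["这家餐厅的菜非常好吃。", "服务太差了，不会再来。"]
labels = torch.tensor([1, 0])

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

model.train()
loss = model(**inputs, labels=labels).loss  # cross-entropy computed internally
loss.backward()
optimizer.step()
```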