
Chinese RoBERTa L-4 H-512

Developed by uer
This is a Chinese pre-trained language model based on the RoBERTa architecture, with 4 Transformer layers and a hidden size of 512, suitable for a variety of Chinese natural language processing tasks.
Downloads: 873
Release Time: 3/2/2022

Model Overview

This model is the small version (4 layers, hidden size 512) of the Chinese RoBERTa miniatures series. It adopts the Transformer architecture, is pre-trained on Chinese corpora with a masked language modeling objective, and can be used for downstream tasks such as text understanding and classification.
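
Because the model is pre-trained with a masked language modeling objective, it can be queried directly for masked-token prediction. The sketch below assumes the checkpoint is published on the Hugging Face Hub as uer/chinese_roberta_L-4_H-512 and that the transformers library is installed; the example sentence is illustrative.

from transformers import pipeline

# Load the checkpoint into a fill-mask pipeline (assumed Hub id).
unmasker = pipeline("fill-mask", model="uer/chinese_roberta_L-4_H-512")

# Ask the model to predict the masked character; each candidate comes with a score.
predictions = unmasker("中国的首都是[MASK]京。")
for p in predictions:
    print(p["token_str"], p["score"])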

Model Features

Multiple size options
The series offers 24 model sizes, ranging from tiny to base, to fit different computational resource budgets
Chinese optimization
Pre-trained specifically on Chinese text, making it well suited to Chinese NLP tasks
Two-stage training
Pre-trained first with short sequences and then continued with long sequences to improve the model's handling of texts of varying lengths

Model Capabilities

Text feature extraction
Masked language modeling
Text classification
Sentiment analysis
Sentence matching
Natural language inference
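
As a rough illustration of the text feature extraction capability listed above, the following sketch loads the checkpoint with the Transformers library and extracts token-level hidden states. BertTokenizer and BertModel are assumed to be compatible because the UER checkpoints are released in BERT format, and the example text is illustrative.

from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("uer/chinese_roberta_L-4_H-512")
model = BertModel.from_pretrained("uer/chinese_roberta_L-4_H-512")

text = "这部电影的剧情非常精彩。"
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)

# Token-level features: shape (1, sequence_length, 512) for this checkpoint.
print(output.last_hidden_state.shape)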

Use Cases

Text understanding
Sentiment analysis
Analyze the sentiment polarity of user reviews
Achieved 93.4% accuracy on a Chinese sentiment analysis benchmark
News classification
Automatically classify news articles
Achieved 65.1% accuracy on the CLUE news classification task
Language reasoning
Natural language inference
Determine the logical relationship between two sentences
Achieved 69.7% accuracy on the CLUE natural language inference task
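
For the classification-style use cases above (sentiment analysis, news classification), the model is typically fine-tuned with a classification head. The sketch below is a minimal, illustrative fine-tuning loop; the toy examples, label set, and hyperparameters are assumptions and do not reproduce the reported accuracies.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("uer/chinese_roberta_L-4_H-512")
model = BertForSequenceClassification.from_pretrained(
    "uer/chinese_roberta_L-4_H-512", num_labels=2)  # 2 classes: negative / positive

# Toy training examples (illustrative only).
texts = ["这家餐厅的菜很好吃", "物流太慢了，体验很差"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, max_length=128,
                  return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

model.train()
for _ in range(3):  # a few toy epochs
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()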