
Chinese RoBERTa L-6 H-256

Developed by UER
A Chinese RoBERTa model pre-trained on CLUECorpusSmall, with 6 layers and 256 hidden units.
Downloads 58
Release Time: 3/2/2022

Model Overview

This model is a compact member of the Chinese RoBERTa series, suitable for a wide range of Chinese natural language processing tasks such as text classification, sentiment analysis, and text matching.
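The L-6 H-256 naming describes the architecture: 6 Transformer layers with a hidden size of 256. A rough parameter count can be derived from those two numbers with the standard BERT/RoBERTa layer layout. The vocabulary size (21128, the Google Chinese BERT vocab) and maximum position count (512) below are assumptions, not values stated on this page:

```python
# Rough parameter-count estimate for a BERT/RoBERTa-style encoder.
# Assumed (not from this page): vocab size 21128, max positions 512.

def estimate_params(layers: int, hidden: int,
                    vocab: int = 21128, max_pos: int = 512) -> int:
    """Approximate parameter count of a Transformer encoder."""
    # Embeddings: token + position + segment tables, plus one LayerNorm.
    emb = (vocab + max_pos + 2) * hidden + 2 * hidden
    per_layer = (
        4 * (hidden * hidden + hidden)        # Q, K, V, output projections
        + (hidden * 4 * hidden + 4 * hidden)  # FFN up-projection (4x expansion)
        + (4 * hidden * hidden + hidden)      # FFN down-projection
        + 4 * hidden                          # two LayerNorms (scale + bias)
    )
    return emb + layers * per_layer

print(estimate_params(6, 256))    # this model: roughly 10M parameters
print(estimate_params(12, 768))   # RoBERTa-Base, for comparison
```

The estimate makes clear why the small checkpoints exist: this configuration is an order of magnitude smaller than the 12-layer base model.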

Model Features

Multiple size options
Provides 24 models at different parameter scales (combinations of layer count and hidden size) to fit different computing-resource budgets.
Optimized for Chinese
Specifically pre-trained and optimized for Chinese text.
Two-stage training
Pre-trained first on short sequences, then further pre-trained on long sequences to improve model performance.
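The two-stage schedule above can be sketched as a simple stage loop. The concrete sequence lengths and step counts here (128 tokens then 512 tokens) are illustrative assumptions based on common BERT-style practice, not values stated on this page:

```python
# Sketch of a two-stage pre-training schedule: short sequences first,
# then long sequences. Lengths and step counts are assumed, not quoted.

STAGES = [
    {"seq_length": 128, "steps": 1_000_000},  # stage 1: short sequences
    {"seq_length": 512, "steps": 250_000},    # stage 2: long sequences
]

def truncate(tokens, seq_length):
    """Clip a token sequence to the stage's maximum length."""
    return tokens[:seq_length]

def run_schedule(corpus, steps_per_stage=2):
    """Iterate the stages (with a tiny stand-in step count) and record
    the effective maximum batch length at each step."""
    seen = []
    for stage in STAGES:
        for _ in range(steps_per_stage):  # stand-in for stage["steps"]
            batch = [truncate(doc, stage["seq_length"]) for doc in corpus]
            seen.append((stage["seq_length"], max(len(b) for b in batch)))
    return seen
```

Training on short sequences first is cheaper (attention cost grows with sequence length), while the second stage teaches the position embeddings beyond the short-sequence range.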

Model Capabilities

Text feature extraction
Masked language modeling
Text classification
Sentiment analysis
Text matching
Natural language inference
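The masked-language-modeling capability listed above is the model's pre-training objective. A minimal sketch of the standard BERT-style input corruption (mask roughly 15% of tokens; of those, 80% become `[MASK]`, 10% become a random token, 10% stay unchanged) looks like this. The token IDs and the `[MASK]` ID are made-up stand-ins, not this model's real vocabulary:

```python
# BERT-style MLM corruption sketch. MASK_ID and VOCAB_SIZE are assumed
# stand-ins, not taken from this model's tokenizer.

import random

MASK_ID = 103
VOCAB_SIZE = 21128

def mask_tokens(ids, mask_prob=0.15, rng=None):
    """Return (corrupted inputs, labels); label -100 means 'not predicted'."""
    rng = rng or random.Random(0)
    inputs, labels = list(ids), [-100] * len(ids)
    for i, tok in enumerate(ids):
        if rng.random() < mask_prob:
            labels[i] = tok              # the model must recover this token
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK_ID      # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.randrange(VOCAB_SIZE)  # 10%: random token
            # else: 10% keep the original token
    return inputs, labels
```

The same pre-trained encoder is then fine-tuned for the downstream capabilities in the list (classification, matching, inference) by adding a small task head.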

Use Cases

Sentiment analysis
Sentiment analysis of product reviews
Analyzes the sentiment polarity of product reviews on e-commerce platforms.
Achieves 94.8% accuracy on Chinese sentiment analysis tasks.
Text classification
News classification
Automatically classifies news articles.
Achieves 65.6% accuracy on the CLUE news classification task.
Semantic understanding
Text matching
Judges the semantic similarity between two texts.
Achieves 88.1% accuracy on text matching tasks.
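A common lightweight way to use the encoder for text matching is to mean-pool the per-token feature vectors it extracts for each sentence and compare the pooled vectors with cosine similarity. The sketch below shows only that post-processing step; the vectors are toy stand-ins for real encoder outputs:

```python
# Mean pooling + cosine similarity over encoder token vectors.
# The token vectors here are toy stand-ins for real model outputs.

import math

def mean_pool(token_vectors):
    """Average a list of equal-length token vectors into one sentence vector."""
    n = len(token_vectors)
    return [sum(v[d] for v in token_vectors) / n
            for d in range(len(token_vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

sent_a = mean_pool([[1.0, 0.0], [3.0, 0.0]])   # toy "sentence A" tokens
sent_b = mean_pool([[2.0, 0.1], [2.0, -0.1]])  # toy "sentence B" tokens
print(cosine(sent_a, sent_b))
```

In practice the reported matching accuracy comes from fine-tuning the encoder on sentence pairs rather than from raw pooled features, but the pooled-vector approach is a quick zero-training baseline.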