
Chinese RoBERTa L-6 H-768

Developed by uer
A Chinese RoBERTa model pre-trained on CLUECorpusSmall, with 6 transformer layers and a 768-dimensional hidden size, suitable for a variety of Chinese NLP tasks.
Downloads: 222
Release Time: 3/2/2022

Model Overview

This is a Chinese RoBERTa model pre-trained with a masked language modeling (MLM) objective. It can be used for natural language processing tasks such as text feature extraction, text classification, and sentiment analysis.
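For example, the model can be used directly for masked token prediction. The sketch below assumes the checkpoint is published on Hugging Face under the ID uer/chinese_roberta_L-6_H-768 (not stated in this listing) and that the transformers library is installed:

```python
from transformers import pipeline

# Assumed Hugging Face model ID; UER checkpoints use BERT-style
# tokenization and weights, which the fill-mask pipeline loads directly.
unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-6_H-768')

# Predict the masked character; a well-trained model should rank
# "北" (completing "北京", Beijing) near the top.
print(unmasker("中国的首都是[MASK]京。"))
```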

Model Features

Multi-size model selection
Provides 24 models at different parameter scales, from ultra-small to base, to fit different compute budgets.
Chinese optimization
Optimized specifically for Chinese text and performs strongly on Chinese NLP tasks.
Two-stage training
Pre-trained in two stages, first with a sequence length of 128 and then 512, improving the model's handling of texts of different lengths.

Model Capabilities

Text feature extraction
Masked language modeling
Text classification
Sentiment analysis
Text matching
Natural language inference
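As a sketch of the text feature extraction capability listed above, the snippet below loads the encoder and takes the final hidden states as token-level features. The model ID and the PyTorch-based usage are assumptions, not part of the original listing:

```python
import torch
from transformers import BertTokenizer, BertModel

# Assumed Hugging Face model ID for this checkpoint.
MODEL_ID = 'uer/chinese_roberta_L-6_H-768'

tokenizer = BertTokenizer.from_pretrained(MODEL_ID)
model = BertModel.from_pretrained(MODEL_ID)

inputs = tokenizer("用这个模型提取文本特征。", return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# Token-level features: shape (batch, seq_len, 768). A common choice
# for a single sentence vector is the mean over token positions.
features = outputs.last_hidden_state
sentence_vec = features.mean(dim=1)
print(sentence_vec.shape)  # torch.Size([1, 768])
```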

Use Cases

Sentiment analysis
Product review sentiment analysis
Analyze the sentiment polarity of product reviews on e-commerce platforms.
Achieves 93.4% accuracy on a Chinese sentiment analysis task.
Text classification
News classification
Automatically categorize news articles.
Achieves 65.1% accuracy on the CLUE news classification task.
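The classification use cases above require fine-tuning, since the released checkpoint contains only the pre-trained encoder. A minimal sketch, assuming the same Hugging Face model ID and a hypothetical two-class sentiment setup:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

MODEL_ID = 'uer/chinese_roberta_L-6_H-768'  # assumed model ID

tokenizer = BertTokenizer.from_pretrained(MODEL_ID)
# The classification head is randomly initialized and must be fine-tuned
# on labeled review data before its predictions are meaningful.
model = BertForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

inputs = tokenizer("这个商品质量很好，物流也快。", return_tensors='pt')
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class probabilities, e.g. [negative, positive] (hypothetical label order)
```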