
Chinese RoBERTa L-12 H-768

Developed by UER
A Chinese pre-trained language model based on the RoBERTa architecture; per the L-12, H-768 naming, it has 12 Transformer layers and a hidden size of 768
Downloads: 419
Release date: 3/2/2022

Model Overview

This model is the base-sized member of the Chinese RoBERTa miniature model series and is suitable for a variety of Chinese natural language processing tasks, such as text classification, sentiment analysis, and sentence similarity calculation.
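As a quick illustration (not part of the original card): assuming the model is published on Hugging Face under the id uer/chinese_roberta_L-12_H-768, it can be exercised with the transformers fill-mask pipeline:

```python
# Minimal fill-mask sketch. The tokenizer is BERT-style, so the mask
# token is [MASK]. The model id is assumed from this card's title.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="uer/chinese_roberta_L-12_H-768")

# "北京是[MASK]国的首都。" -> the model should rank 中 highly.
for pred in fill_mask("北京是[MASK]国的首都。"):
    print(pred["token_str"], round(pred["score"], 3))
```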

Model Features

Multiple Size Options
Offers 24 model variants at different parameter scales, from ultra-small to base, to suit diverse computational budgets (see the loading sketch after this list)
Two-stage Training
Pre-trained in two phases, first at a sequence length of 128 and then at 512, improving the handling of texts of varying lengths
Public Corpus Training
Trained on the publicly available CLUECorpusSmall corpus, so results can be reproduced
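By this model's naming convention, the sibling variants are presumably published under ids of the form uer/chinese_roberta_L-{layers}_H-{hidden}. The sketch below shows how one might pick a variant by compute budget; only the base id comes from this card, so the smaller ids are inferred and should be verified on the uer organization page:

```python
# Sketch: choosing a variant by compute budget. Smaller ids below are
# inferred from the L-{layers}_H-{hidden} naming, not from this card.
from transformers import AutoModel, AutoTokenizer

VARIANTS = {
    "tiny":   "uer/chinese_roberta_L-2_H-128",   # assumed id
    "small":  "uer/chinese_roberta_L-4_H-512",   # assumed id
    "medium": "uer/chinese_roberta_L-8_H-512",   # assumed id
    "base":   "uer/chinese_roberta_L-12_H-768",  # from this card's title
}

model_id = VARIANTS["base"]
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
print(model.config.num_hidden_layers, model.config.hidden_size)  # 12 768
```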

Model Capabilities

Chinese Text Understanding
Masked Language Modeling
Text Feature Extraction (see the pooling sketch after this list)
Sentiment Analysis
Text Classification
Sentence Similarity Calculation
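For the text feature extraction capability, one common recipe (an assumption here, not something the card prescribes) is mean pooling the encoder's final hidden states into a fixed-size sentence vector; a minimal sketch:

```python
# Feature-extraction sketch: mean-pool the final hidden states into a
# single 768-dim sentence vector. The pooling choice is an assumption.
import torch
from transformers import AutoModel, AutoTokenizer

name = "uer/chinese_roberta_L-12_H-768"  # assumed Hugging Face id
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("这部电影的配乐非常出色。", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (1, seq_len, 768)
mask = inputs["attention_mask"].unsqueeze(-1)    # zero out padding
vector = (hidden * mask).sum(1) / mask.sum(1)    # (1, 768)
print(vector.shape)
```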

Use Cases

Sentiment Analysis
Product Review Sentiment Analysis: analyzing sentiment tendencies in user reviews from e-commerce platforms. Reported accuracy: 93.4% on Chinese sentiment analysis tasks.
Text Classification
News Classification: topic classification for news articles. Reported accuracy: 65.1% on the CLUE news classification task.
Semantic Understanding
Sentence Similarity Calculation: determining the semantic similarity of two sentences (a zero-shot sketch follows this section). Reported accuracy: 86.5% on sentence similarity tasks.
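The 86.5% figure above presumably reflects supervised fine-tuning on a paired-sentence task. As a rough zero-shot alternative, one can compare mean-pooled embeddings with cosine similarity; this is illustrative only and will score well below a fine-tuned model:

```python
# Zero-shot similarity sketch: cosine of mean-pooled embeddings.
# Fine-tuning (as in the reported result) performs much better.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

name = "uer/chinese_roberta_L-12_H-768"  # assumed Hugging Face id
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)     # mean over real tokens

a, b = embed("今天天气很好"), embed("今天天气不错")
print(F.cosine_similarity(a, b).item())  # closer to 1.0 = more similar
```

For results like those reported above, the standard setup is a classification head fine-tuned on labeled sentence pairs rather than raw embedding comparison.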