
Chinese RoBERTa L-4 H-768

Developed by uer
One of the 24 Chinese RoBERTa models pre-trained on CLUECorpusSmall with the UER-py framework, supporting masked language modeling and text feature extraction.
Release date: 3/2/2022

Model Overview

This model is a mid-sized variant of the Chinese RoBERTa series, with 4 layers and a hidden size of 768 as its name indicates, suitable for a range of Chinese natural language processing tasks.
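To give a sense of the model's scale, the sketch below estimates its parameter count from the L-4 H-768 configuration. The exact figure depends on details this card does not state, so the standard BERT hyperparameters are assumed: a feed-forward size of 4×H, 512 position embeddings, and the 21,128-token Chinese BERT vocabulary.

```python
# Rough parameter-count estimate for a BERT-style encoder with
# L = 4 layers and hidden size H = 768, as the model name suggests.
# Assumed (not stated on this card): FFN size 4*H, 512 max positions,
# 2 token types, and the 21,128-token Chinese BERT vocabulary.
VOCAB, H, L, MAX_POS, TYPES = 21128, 768, 4, 512, 2

embeddings = (VOCAB + MAX_POS + TYPES) * H + 2 * H  # word/pos/type tables + LayerNorm
per_layer = (
    4 * (H * H + H)                  # Q, K, V and output projections (weights + biases)
    + 2 * (H * 4 * H) + 4 * H + H    # feed-forward up/down projections + biases
    + 2 * 2 * H                      # two LayerNorms (scale + shift)
)
total = embeddings + L * per_layer

print(f"~{total / 1e6:.1f}M parameters")
```

Under these assumptions the model lands at roughly 45M parameters, well below BERT-base's ~110M, which is what makes the smaller members of this 24-model family attractive when compute is limited.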

Model Features

Multi-size Options
Offers 24 model configurations, from ultra-small to base size, to fit different computational budgets
Chinese Optimization
Specifically pre-trained for Chinese text, achieving excellent performance on CLUE benchmarks
Two-stage Training
Adopts a two-stage training strategy, pre-training first at sequence length 128 and then at 512, to enhance model performance

Model Capabilities

Text feature extraction
Masked language prediction
Chinese text understanding
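As a sketch of the masked language modeling capability listed above, the snippet below uses the Hugging Face `fill-mask` pipeline. The model id `uer/chinese_roberta_L-4_H-768` is inferred from this card's title and is an assumption; the first call downloads the weights from the Hub.

```python
from transformers import pipeline

# Model id inferred from this card's title (an assumption; adjust if
# the Hub name differs). The first call downloads the weights.
unmasker = pipeline("fill-mask", model="uer/chinese_roberta_L-4_H-768")

# Predict the masked character in "北京是[MASK]国的首都。"
# ("Beijing is the capital of [MASK] country.")
results = unmasker("北京是[MASK]国的首都。")
for r in results:
    print(r["token_str"], round(r["score"], 4))
```

The same checkpoint can be loaded with `AutoModel` / `AutoTokenizer` for text feature extraction, taking the encoder's hidden states as sentence or token embeddings.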

Use Cases

Text Understanding
Sentiment Analysis
Analyze the sentiment tendency of user reviews
Achieves 94.8% accuracy on Chinese sentiment analysis tasks
Text Classification
Classify news articles or app descriptions
Achieves 65.6% accuracy on CLUE news classification tasks
Semantic Understanding
Sentence Matching
Determine the semantic similarity between two sentences
Achieves 88.1% accuracy on sentence matching tasks
Natural Language Inference
Determine logical relationships between texts
Achieves 71.2% accuracy on CLUE natural language inference tasks