
Chinese RoBERTa L-2 H-256

Developed by UER
A Chinese RoBERTa model pre-trained on CLUECorpusSmall, featuring 2 layers and a hidden size of 256, suitable for a wide range of Chinese NLP tasks.
Downloads: 26
Release Time: 3/2/2022

Model Overview

This is a compact configuration from the series of Chinese RoBERTa models pre-trained with the UER-py framework. It is optimized for Chinese text processing and supports masked language modeling and text feature extraction.
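As a quick illustration of the masked-language-modeling use, here is a minimal sketch with the Hugging Face transformers pipeline; the repo id uer/chinese_roberta_L-2_H-256 is an assumption inferred from the model name above.

```python
# Minimal fill-mask sketch; the repo id below is assumed from the model name.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="uer/chinese_roberta_L-2_H-256")

# The tokenizer is BERT-style, so the mask token is [MASK].
for pred in fill_mask("北京是[MASK]国的首都。"):
    print(pred["token_str"], round(pred["score"], 4))
```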

Model Features

Multiple model sizes
Offers 24 configurations (from 2 layers/128 hidden units up to 12 layers/768 hidden units) to fit different computational budgets; see the naming sketch after this list
Efficient pre-training
Uses a two-stage training strategy (sequence length 128 first, then 512); models pre-trained on CLUECorpusSmall outperform counterparts trained on larger corpora
Chinese optimization
Designed and tuned for the characteristics of Chinese text, performing strongly on tasks such as sentiment analysis and text matching
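The 24 sizes form a regular grid of depth times hidden width. A short sketch of the assumed repo-id naming pattern, useful for picking a size that fits a compute budget:

```python
# Enumerate the assumed repo-id grid: 6 depths x 4 widths = 24 configurations.
layers = [2, 4, 6, 8, 10, 12]
hidden = [128, 256, 512, 768]

repo_ids = [f"uer/chinese_roberta_L-{l}_H-{h}" for l in layers for h in hidden]
print(len(repo_ids))  # 24
print(repo_ids[1])    # uer/chinese_roberta_L-2_H-256 (this model)
```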

Model Capabilities

Masked language modeling
Text feature extraction (see the sketch after this list)
Chinese text understanding
Downstream task fine-tuning
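For the feature-extraction capability, a minimal sketch with transformers, again assuming the repo id above; the final hidden states serve as token-level features.

```python
# Feature-extraction sketch; repo id assumed from the model name.
import torch
from transformers import BertModel, BertTokenizer

model_id = "uer/chinese_roberta_L-2_H-256"
tokenizer = BertTokenizer.from_pretrained(model_id)
model = BertModel.from_pretrained(model_id)

inputs = tokenizer("用这个模型提取中文文本特征。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Shape (batch, seq_len, hidden); hidden should be 256 for this configuration.
print(outputs.last_hidden_state.shape)
```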

Use Cases

Sentiment analysis (a fine-tuning sketch follows this list)
Product review sentiment analysis: analyze sentiment tendencies in user reviews on e-commerce platforms. Reported result: 94.8% accuracy on Chinese sentiment analysis tasks.
Text matching
QA system matching: compute semantic similarity between questions and candidate answers. Reported result: 88.1% accuracy on text matching tasks.
Text classification
News classification: automatic classification of Chinese news articles. Reported result: 65.6% accuracy on the CLUE news classification task.
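The reported accuracies come from fine-tuned checkpoints; the pre-trained model itself only provides a randomly initialized classification head. A hedged sketch of the starting point for sentiment fine-tuning (the binary label setup and example text are hypothetical):

```python
# Sentiment-classification scaffold; the head is untrained until fine-tuned.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

model_id = "uer/chinese_roberta_L-2_H-256"  # assumed repo id
tokenizer = BertTokenizer.from_pretrained(model_id)
# num_labels=2 is a hypothetical binary positive/negative setup.
model = BertForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer("这个产品质量很好，物流也快。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# These probabilities are meaningless before fine-tuning on labeled reviews.
print(torch.softmax(logits, dim=-1))
```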