Rbtl3
Rbtl3 is a re-trained three-layer RoBERTa-wwm-ext-large model: a Chinese pre-trained BERT-style model that uses a whole word masking strategy, released to accelerate the development of Chinese natural language processing.
Release Time: 3/2/2022
Model Overview
This is a Chinese pre-trained language model based on the RoBERTa architecture and trained with a whole word masking strategy; it is suitable for a wide range of Chinese natural language processing tasks.
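A minimal usage sketch for extracting sentence representations. It assumes the checkpoint is published on the Hugging Face Hub under the ID hfl/rbtl3 and, like other HFL whole-word-masking checkpoints, is loaded with BERT-style classes rather than RoBERTa classes; treat the hub ID and the 1024-dimensional hidden size as assumptions.

```python
# Feature-extraction sketch. The hub ID "hfl/rbtl3" and the use of BERT-style
# loading classes are assumptions based on how HFL publishes its wwm checkpoints.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("hfl/rbtl3")
model = BertModel.from_pretrained("hfl/rbtl3")

text = "哈工大讯飞联合实验室发布了中文预训练模型。"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden states into a single sentence embedding.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # expected: torch.Size([1, 1024]) for the large hidden size
```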
Model Features
Whole Word Masking Strategy
Employs a whole word masking strategy during pre-training, improving the model's understanding of Chinese word-level semantics.
Three-layer Architecture
The re-trained three-layer architecture balances model performance with inference efficiency.
Chinese Optimization
Specifically optimized for Chinese text and Chinese-language natural language processing tasks.
Model Capabilities
Text Classification
Named Entity Recognition
Text Generation
Question Answering
Use Cases
Natural Language Processing
Chinese Text Classification
Used to classify Chinese text, for example sentiment analysis or topic classification (see the fine-tuning sketch after this list).
Named Entity Recognition
Identifies named entities in Chinese text, such as person, place, and organization names.
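A hypothetical fine-tuning setup for the Chinese text classification use case (binary sentiment as an example). The hub ID hfl/rbtl3, the toy sentences, and the label scheme are assumptions for illustration only, not the authors' published training recipe.

```python
# Single training step for an assumed binary sentiment task on top of "hfl/rbtl3".
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("hfl/rbtl3")
model = BertForSequenceClassification.from_pretrained("hfl/rbtl3", num_labels=2)

texts = ["这部电影太精彩了", "服务态度非常差"]  # toy examples
labels = torch.tensor([1, 0])  # assumed convention: 1 = positive, 0 = negative

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # returns loss when labels are provided
outputs.loss.backward()
optimizer.step()
print(float(outputs.loss))
```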