Chinese ELECTRA 180G Base Generator
An ELECTRA model pre-trained on 180 GB of Chinese data. It has a small parameter count yet strong performance, making it well suited to Chinese natural language processing tasks.
Release Time: 3/2/2022
Model Overview
This model is part of the Chinese ELECTRA series released by the HIT & iFLYTEK Joint Lab (HFL); this checkpoint is the generator component of the base-size model. Compared to BERT, the ELECTRA architecture is smaller and more efficient, and it is suitable for a wide range of Chinese NLP tasks.
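As a quick orientation, the sketch below loads the generator checkpoint with the Hugging Face transformers library and exercises it as a masked language model. The Hub ID hfl/chinese-electra-180g-base-generator matches HFL's published naming, but verify it against the actual release before relying on it.

```python
from transformers import pipeline

# The generator checkpoint is a masked language model, so it can be
# exercised directly with the fill-mask pipeline. The Hub ID below is
# an assumption based on HFL's published naming.
fill_mask = pipeline(
    "fill-mask",
    model="hfl/chinese-electra-180g-base-generator",
)

# Predict the masked character in a short Chinese sentence
# ("Beijing is the [MASK] city of China.").
for candidate in fill_mask("北京是中国的[MASK]都。"):
    print(candidate["token_str"], round(candidate["score"], 4))
```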
Model Features
Efficient Pre-training
Uses ELECTRA's replaced token detection (RTD) objective: a small generator fills in masked tokens, and a discriminator learns to detect which tokens were replaced. Because the discriminator receives a learning signal from every token rather than only the masked ones, this is more sample-efficient than BERT's masked language modeling.
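A minimal sketch of the detection side of this objective, assuming the companion discriminator checkpoint hfl/chinese-electra-180g-base-discriminator (HFL releases the generator and discriminator separately):

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

# Sketch of replaced-token detection; the discriminator Hub ID is an
# assumption based on HFL's published naming.
model_id = "hfl/chinese-electra-180g-base-discriminator"
tokenizer = ElectraTokenizerFast.from_pretrained(model_id)
discriminator = ElectraForPreTraining.from_pretrained(model_id)

# A sentence where one character has been deliberately replaced
# ("首" -> "湖" in "北京是中国的首都。").
text = "北京是中国的湖都。"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # One logit per token: "was this token replaced?"
    logits = discriminator(**inputs).logits

# Positive logits mark tokens the discriminator believes were replaced.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, logits[0]):
    print(token, "replaced" if score > 0 else "original")
```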
Compact Size with High Performance
The parameter count is only about 1/10 that of BERT, yet the model achieves similar or even better performance on multiple NLP tasks.
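This claim is straightforward to check by counting parameters directly; a sketch follows, assuming the HFL generator checkpoint and bert-base-chinese as the comparison point:

```python
from transformers import AutoModel

# Rough parameter-count comparison. Both Hub IDs refer to published
# checkpoints, but adjust them to the exact variants being compared.
electra = AutoModel.from_pretrained("hfl/chinese-electra-180g-base-generator")
bert = AutoModel.from_pretrained("bert-base-chinese")

def count_params(model):
    # Total number of scalar parameters in the model.
    return sum(p.numel() for p in model.parameters())

print(f"ELECTRA generator: {count_params(electra):,} parameters")
print(f"BERT-base Chinese: {count_params(bert):,} parameters")
```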
Chinese Optimization
Specifically optimized for the characteristics of the Chinese language, and pre-trained on 180 GB of Chinese text data.
Model Capabilities
Text Understanding
Text Classification
Named Entity Recognition
Question Answering System
Text Similarity Calculation (see the sketch after this list)
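As one concrete capability example, the following sketch scores text similarity by mean-pooling the encoder's hidden states and comparing them with cosine similarity. The pooling strategy and the choice of the generator checkpoint are illustrative assumptions, not a prescribed recipe:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Text-similarity sketch using mean-pooled encoder states; the Hub ID
# and pooling choice are assumptions for illustration.
model_id = "hfl/chinese-electra-180g-base-generator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

def embed(text: str) -> torch.Tensor:
    # Encode the text and mean-pool token states into one vector.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden)
    return hidden.mean(dim=1).squeeze(0)

# Two paraphrases: "The weather is nice today" / "Today is a sunny day".
a = embed("今天天气很好")
b = embed("今天是个晴朗的日子")
print(torch.cosine_similarity(a, b, dim=0).item())
```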
Use Cases
Natural Language Processing
Sentiment Analysis
Analyze the sentiment polarity of user reviews.
Accuracy is comparable to BERT, with faster inference.
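A hedged starting point for such a classifier is sketched below. In practice the discriminator checkpoint, not the generator, is usually fine-tuned for classification; the two-label setup and the example sentence are hypothetical:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Sentiment-classification starting point. The Hub ID is HFL's published
# discriminator; num_labels=2 (negative/positive) is a hypothetical label set.
model_id = "hfl/chinese-electra-180g-base-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# "The food at this restaurant is delicious and the service is attentive."
inputs = tokenizer("这家餐厅的菜很好吃，服务也很周到。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The classification head is randomly initialized until fine-tuned,
# so these probabilities are not meaningful yet.
print(torch.softmax(logits, dim=-1))
```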
Intelligent Customer Service
Used to build Chinese question-answering systems.
Effectively understands user intent and returns accurate responses.
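For an extractive QA flavor of this use case, the sketch below wires the encoder into a span-prediction head. The head is randomly initialized until fine-tuned on a Chinese QA dataset (CMRC 2018 is a common choice), and the question/context strings are hypothetical:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Extractive QA sketch; the span-prediction head needs fine-tuning on a
# Chinese QA dataset before its outputs are meaningful.
model_id = "hfl/chinese-electra-180g-base-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

# "What is the return window?" / "We accept no-questions-asked returns
# within seven days, counted from the date of receipt."
question = "退货期限是多久？"
context = "本店支持七天无理由退货，自签收之日起计算。"
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the highest-scoring answer span (only meaningful after fine-tuning).
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer_ids = inputs["input_ids"][0][start : end + 1]
print(tokenizer.decode(answer_ids))
```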