C4 Zh Distilbert Base Uncased
Developed by liam168
A Chinese text classification model based on the DistilBERT architecture, trained to classify text into four categories: Women, Sports, Literature, and Campus.
Downloads: 21
Release Time: 3/2/2022
Model Overview
This is a Chinese text classification model that assigns input text to one of four predefined categories: Women, Sports, Literature, and Campus.
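The snippet below is a minimal usage sketch with the transformers library; the Hugging Face hub ID used here (liam168/c4-zh-distilbert-base-uncased) is an assumption based on the model name and developer listed above, and may differ from the published checkpoint name.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model_name = "liam168/c4-zh-distilbert-base-uncased"  # assumed hub ID

model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The text-classification pipeline returns the predicted label and a confidence score.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("我喜欢打篮球"))  # expected to map to the Sports category
```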
Model Features
Lightweight Model
Based on the DistilBERT architecture, it retains most of BERT's performance while reducing model size and computational resource requirements (a rough size comparison is sketched after this section).
Chinese Text Classification
Specifically designed for classifying Chinese text, supporting four predefined categories.
Efficient Inference
Thanks to the DistilBERT architecture, the model maintains high accuracy while offering faster inference than a full-size BERT model.
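To give a concrete sense of the size reduction a DistilBERT backbone brings, the following sketch counts the parameters of a generic DistilBERT and BERT checkpoint. The two checkpoint IDs (distilbert-base-uncased, bert-base-uncased) are illustrative stand-ins, not necessarily the exact models behind this classifier.

```python
from transformers import AutoModel

def param_count(model_id: str) -> int:
    """Load a checkpoint and count its parameters."""
    model = AutoModel.from_pretrained(model_id)
    return sum(p.numel() for p in model.parameters())

# DistilBERT has roughly 40% fewer parameters than BERT-base.
print(f"distilbert-base-uncased: {param_count('distilbert-base-uncased'):,} parameters")
print(f"bert-base-uncased:       {param_count('bert-base-uncased'):,} parameters")
```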
Model Capabilities
Chinese Text Classification
Sentiment Analysis
Use Cases
Content Classification
Social Media Content Classification
Automatically classify Chinese social media posts into the Women, Sports, Literature, or Campus categories (a batch-classification sketch follows this section).
High accuracy makes it suitable for content recommendation systems.
News Classification
Classify Chinese news articles for easier content management and retrieval.
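As a sketch of the batch-classification workflow implied by these use cases; the hub ID and the example posts are assumptions, not taken from the published card.

```python
from transformers import pipeline

# Assumed hub ID; replace with the actual checkpoint name if it differs.
classifier = pipeline("text-classification", model="liam168/c4-zh-distilbert-base-uncased")

posts = [
    "今晚的比赛太精彩了，最后一分钟绝杀",  # a sports-flavoured post
    "这本小说的结尾让我久久不能平静",      # a literature-flavoured post
]

# Pipelines accept a list of texts and return one prediction per input,
# which suits tagging a feed of social media posts or news articles.
for post, result in zip(posts, classifier(posts)):
    print(post, "->", result["label"], round(result["score"], 3))
```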