# Chinese Text Generation

The listing below gives each model's author, license, download count, and like count.

| Model | Author | License | Downloads | Likes | Tags | Description |
|---|---|---|---|---|---|---|
| Qwen3 235B A22B Mixed 3-6bit | mlx-community | Apache-2.0 | 100 | 2 | Large Language Model | Mixed 3-6-bit quantization of Qwen/Qwen3-235B-A22B, optimized for efficient inference on Apple's MLX framework. |
| Qwen3 8B Q4_K_M GGUF | ufoym | Apache-2.0 | 342 | 3 | Large Language Model, Transformers | GGUF build of Qwen3-8B for the llama.cpp framework; supports text-generation tasks. |
| Mlabonne Qwen3 8B Abliterated GGUF | bartowski | — | 6,892 | 5 | Large Language Model | llama.cpp quantization of Qwen3-8B-abliterated for text-generation tasks. |
| Qwen3 8B bf16 | mlx-community | Apache-2.0 | 1,658 | 1 | Large Language Model | MLX-format bf16 conversion of Qwen/Qwen3-8B; supports text-generation tasks. |
| Qwen3 0.6B 4bit | mlx-community | Apache-2.0 | 6,015 | 5 | Large Language Model | 4-bit quantization of Qwen/Qwen3-0.6B for efficient inference on the MLX framework. |
| Qwen3 8B MLX 8bit | lmstudio-community | Apache-2.0 | 63.46k | 2 | Large Language Model | 8-bit MLX-format quantization of Qwen/Qwen3-8B for text-generation tasks. |
| Qwen Qwen3 4B GGUF | bartowski | — | 10.58k | 9 | Large Language Model | llama.cpp imatrix quantization of the Qwen team's Qwen3-4B, offered in multiple quantization types. |
| Doge 20M Chinese | wubingheng | Apache-2.0 | 65 | 2 | Large Language Model, Transformers, Supports Multiple Languages | Uses dynamic masked attention for sequence transformation, with either multi-layer perceptrons or a cross-domain mixture of experts for state transitions. |
| DeepSeek R1 ReDistill Qwen 7B v1.1 Q8_0 GGUF | NikolayKozloff | MIT | 44 | 2 | Large Language Model | GGUF conversion of DeepSeek-R1-ReDistill-Qwen-7B-v1.1 for text-generation tasks. |
| Llama 3.1 0x Mini Q8_0 GGUF | NikolayKozloff | — | 19 | 1 | Large Language Model | GGUF conversion of ozone-ai/llama-3.1-0x-mini for the llama.cpp framework. |
| GPT2 XLarge Chinese CLUECorpusSmall | uer | — | 315 | 5 | Large Language Model, Transformers, Chinese | Chinese GPT2-xlarge model pre-trained on CLUECorpusSmall for Chinese text-generation tasks. |
| GPT2 Medium Chinese CLUECorpusSmall | uer | — | 863 | 3 | Large Language Model, Transformers, Chinese | Chinese GPT2-medium model pre-trained on CLUECorpusSmall for Chinese text generation. |
| Randeng T5 77M | IDEA-CCNL | Apache-2.0 | 104 | 3 | Large Language Model, Transformers, Chinese | Lightweight Chinese version of mT5-small, specialized for natural-language transformation tasks. |
| Wenzhong GPT2 110M | IDEA-CCNL | Apache-2.0 | 2,478 | 28 | Large Language Model, Transformers, Chinese | Chinese counterpart of GPT2-Small, specialized for natural-language generation tasks. |
| GPT2 Distil Chinese CLUECorpusSmall | uer | — | 1,043 | 20 | Large Language Model, Chinese | Lightweight Chinese GPT2-distil model (6 layers, 768 hidden units) pre-trained on CLUECorpusSmall, suitable for Chinese text generation. |
| CPT Large | fnlp | — | 122 | 16 | Large Language Model, Transformers, Chinese | Pre-trained asymmetric Transformer for Chinese understanding and generation, supporting a range of natural-language-processing tasks. |
| GPT2 Chinese CLUECorpusSmall | uer | — | 41.45k | 207 | Large Language Model, Chinese | Chinese GPT2 model pre-trained on the CLUECorpusSmall dataset, suitable for Chinese text-generation tasks. |
| BART Base Chinese | fnlp | — | 6,504 | 99 | Large Language Model, Transformers, Chinese | Pre-trained asymmetric Transformer for Chinese understanding and generation, supporting text-to-text generation tasks. |
| GPT2 Chinese Poem | uer | — | 1,905 | 38 | Large Language Model, Chinese | GPT2-based model pre-trained with UER-py, capable of generating classical Chinese poetry. |
| BART Large Chinese | fnlp | — | 638 | 55 | Large Language Model, Transformers, Chinese | Chinese pre-trained model based on the BART architecture, supporting text generation and understanding, released by Fudan University's Natural Language Processing Laboratory. |
© 2025 AIbase