# Chinese dialogue optimization
## Mlabonne Qwen3 1.7B Abliterated GGUF
A quantized release of the Qwen3-1.7B-abliterated model, built with llama.cpp. Multiple quantization types are provided, making it suitable for general text-generation tasks.
- Type: Large Language Model
- Publisher: bartowski
- Downloads: 1,493 · Likes: 2
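
The GGUF quantizations can be run locally with llama.cpp or its Python bindings. The sketch below uses llama-cpp-python; the repo id and filename are assumptions here and should be swapped for the quantization file you actually download.

```python
# Minimal sketch: run one of the GGUF quantizations with llama-cpp-python.
# The repo id and filename pattern below are assumptions; adjust them to the
# quantization you choose. Downloading via from_pretrained needs huggingface-hub.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bartowski/mlabonne_Qwen3-1.7B-abliterated-GGUF",  # assumed Hub repo id
    filename="*Q4_K_M.gguf",  # glob for the quantization type you want
    n_ctx=4096,               # context window for this session
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "用中文介绍一下你自己。"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```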
## RWKV Raven 1B5
RWKV is a large language model architecture that combines the strengths of RNNs and Transformers: efficient training, fast inference, and the ability to process effectively unlimited context lengths.
- Type: Large Language Model
- Library: Transformers
- Publisher: RWKV
- Downloads: 428 · Likes: 12
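
Since the checkpoint is tagged for the Transformers library, it can be loaded with the standard auto classes. This is a minimal sketch, assuming the Hub id RWKV/rwkv-raven-1b5 and the usual Raven question/answer prompt style.

```python
# Minimal sketch: run RWKV Raven 1B5 through Hugging Face Transformers.
# The model id is assumed to match this catalog entry.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RWKV/rwkv-raven-1b5"  # assumed Hub id for this entry
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Raven checkpoints are instruction-tuned; a simple Q/A prompt works.
prompt = "Question: What is special about the RWKV architecture?\n\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```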