# MNN inference optimization
The following MNN model exports are published by the taobao-mnn organization:

| Model | License | Tags | Downloads | Likes | Description |
|---|---|---|---|---|---|
| Qwen3-4B-MNN | Apache-2.0 | Large Language Model, English | 10.60k | 2 | 4-bit quantized MNN export of Qwen3-4B for efficient text generation tasks. |
| Qwen2-0.5B-Instruct-MNN | Apache-2.0 | Large Language Model, English | 880 | 1 | 4-bit quantized MNN export of Qwen2-0.5B-Instruct, suitable for text generation and chat scenarios. |