Qwen3 1.7B 4bit
Qwen3-1.7B-4bit is a 4-bit quantized version of the Qwen3-1.7B model from the Tongyi Qianwen (Qwen) family, converted to the MLX format for efficient inference on Apple Silicon devices.
Downloads 11.85k
Release Time: 4/28/2025
Model Overview
This model is a 4-bit quantized build of the Qwen3-1.7B large language model, optimized for the MLX framework to support efficient text generation tasks.
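A minimal sketch of running the model locally with the mlx-lm package (assuming `pip install mlx-lm` on an Apple Silicon Mac); the Hugging Face repo ID used below is an assumption and may differ from where the weights are actually hosted:

```python
from mlx_lm import load, generate

# Assumed repo ID for the 4-bit MLX weights.
model, tokenizer = load("mlx-community/Qwen3-1.7B-4bit")

# Plain text completion; max_tokens caps the length of the generated output.
text = generate(
    model,
    tokenizer,
    prompt="MLX is a machine learning framework for Apple Silicon that",
    max_tokens=128,
)
print(text)
```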
Model Features
MLX framework optimization
Optimized for Apple Silicon devices, leveraging the MLX framework for efficient inference
4-bit quantization
Reduces memory usage through 4-bit quantization while maintaining good generation quality
Dialogue template support
Built-in chat templates for easy construction of dialogue systems (see the sketch after this list)
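As a sketch of how the built-in chat template can be used for a single dialogue turn (same mlx-lm setup and assumed repo ID as above; the system prompt is illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-1.7B-4bit")

# The tokenizer ships with a chat template, so a conversation can be
# formatted into the prompt string the model expects.
messages = [
    {"role": "system", "content": "You are a concise, helpful assistant."},
    {"role": "user", "content": "Summarize what 4-bit quantization does in one sentence."},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

reply = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(reply)
```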
Model Capabilities
Text generation
Dialogue system
Content creation
Use Cases
Dialogue system
Intelligent customer service
Build an intelligent customer service system that runs entirely on local devices (a minimal multi-turn sketch follows this list)
Content creation
Text-assisted creation
Assist with drafting and generating various types of text content
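A minimal sketch of a local multi-turn customer-service loop built on the chat template shown earlier; the repo ID and system prompt are illustrative assumptions, and conversation history is kept client-side:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-1.7B-4bit")

# Seed the conversation with an assumed support-assistant system prompt.
history = [
    {"role": "system", "content": "You are a customer support assistant for an online store."}
]

while True:
    user_input = input("Customer: ").strip()
    if user_input.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})

    # Re-apply the chat template over the full history on every turn.
    prompt = tokenizer.apply_chat_template(
        history,
        tokenize=False,
        add_generation_prompt=True,
    )
    reply = generate(model, tokenizer, prompt=prompt, max_tokens=512)
    print(f"Assistant: {reply}")
    history.append({"role": "assistant", "content": reply})
```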