DeepSeek-R1-Distill-Qwen-32B 4-bit

Developed by mlx-community
This is the MLX 4-bit quantized version of the DeepSeek-R1-Distill-Qwen-32B model, designed for efficient inference on Apple silicon devices.
Downloads: 130.79k
Release date: January 21, 2025

Model Overview

A 32B-parameter large language model distilled from DeepSeek-R1 onto a Qwen-32B base, converted with 4-bit quantization to run under the MLX framework.
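To see why 4-bit quantization matters on Mac hardware, here is a back-of-envelope estimate of the weight memory for a 32B-parameter model at different precisions. This is a rough sketch: it counts raw weight storage only and ignores the per-group scales and biases that quantized formats add, so the real 4-bit footprint is slightly larger.

```python
# Rough weight-memory estimate for a 32B-parameter model.
# Ignores quantization group scales/biases and runtime KV-cache memory.

PARAMS = 32e9  # 32 billion parameters


def weight_gb(bits_per_param: float) -> float:
    """Gigabytes needed to store the weights at a given precision."""
    return PARAMS * bits_per_param / 8 / 1e9


fp16_gb = weight_gb(16)  # 16-bit weights: ~64 GB, beyond most Macs' unified memory
q4_gb = weight_gb(4)     # 4-bit weights: ~16 GB, feasible on higher-memory Apple silicon

print(f"fp16: {fp16_gb:.0f} GB, 4-bit: {q4_gb:.0f} GB")
```

The 4x reduction in weight storage is what brings a 32B model within reach of consumer Apple silicon machines.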

Model Features

MLX optimization
4-bit quantized version specifically optimized for Apple silicon, enabling efficient operation on Mac devices
Distilled model
Distilled from DeepSeek-R1 onto a Qwen-32B base, retaining much of the teacher's capability while improving inference efficiency
Chinese optimization
Specially optimized for Chinese text processing
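The features above can be exercised with the `mlx-lm` package, which provides the standard `load`/`generate` API for MLX-converted models. A minimal sketch, assuming `pip install mlx-lm`, an Apple silicon Mac, and the `mlx-community/DeepSeek-R1-Distill-Qwen-32B-4bit` model id; the hand-rolled `build_prompt` helper is a simplified stand-in for the model's bundled chat template.

```python
MODEL_ID = "mlx-community/DeepSeek-R1-Distill-Qwen-32B-4bit"


def build_prompt(user_message: str) -> str:
    """Simplified DeepSeek-style chat prompt (illustrative only; in
    practice prefer tokenizer.apply_chat_template from the loaded
    tokenizer, which carries the model's exact template)."""
    return f"<｜User｜>{user_message}<｜Assistant｜>"


def main() -> None:
    # Heavy import kept local: requires MLX on Apple silicon and
    # downloads ~18 GB of quantized weights on first use.
    from mlx_lm import load, generate

    model, tokenizer = load(MODEL_ID)
    text = generate(
        model,
        tokenizer,
        prompt=build_prompt("用三句话介绍一下机器学习。"),
        max_tokens=512,
    )
    print(text)


if __name__ == "__main__":
    main()
```

R1-distill models emit their chain of thought before the final answer, so generous `max_tokens` budgets are advisable.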

Model Capabilities

Text generation
Dialogue interaction
Knowledge Q&A
Text summarization

Use Cases

Intelligent assistant
Chatbot: building Chinese dialogue assistants for a smooth Chinese conversation experience
Content generation
Article creation: assisting Chinese content creation by generating coherent Chinese text