# MLX format
## DeepSeek R1 0528 Qwen3 8B 6bit
- License: MIT
- Category: Large Language Model
- Publisher: mlx-community
- Description: A 6-bit quantized version converted from the DeepSeek-R1-0528-Qwen3-8B model, suitable for text generation tasks in the MLX framework.
## Josiefied Qwen3 4B Abliterated V1 6bit
- Category: Large Language Model
- Publisher: mlx-community
- Description: A 6-bit quantized version of the Qwen3-4B model converted to the MLX format, suitable for text generation tasks.
## Qwen3 8B 4bit DWQ
- License: Apache-2.0
- Category: Large Language Model
- Publisher: mlx-community
- Description: Qwen3-8B-4bit-DWQ is a 4-bit quantized version of Qwen/Qwen3-8B converted to the MLX format, optimized for efficient operation on Apple devices.
## Dia 1.6B 3bit
- License: Apache-2.0
- Category: Speech Synthesis (English)
- Publisher: mlx-community
- Description: Dia-1.6B-3bit is a 3-bit quantized model converted from mlx-community/Dia-1.6B, primarily used for text-to-speech tasks.