# Low-precision and efficient inference
## Mistral Small 3.2 24B Instruct 2506 Bf16

- Maintainer: mlx-community
- License: Apache-2.0
- Tags: Large Language Model, multilingual
- Downloads: 163 · Likes: 1

An MLX-format conversion of Mistral-Small-3.2-24B-Instruct-2506 in bf16, suitable for instruction-following tasks.
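
MLX conversions published by mlx-community are typically run with the `mlx-lm` package on Apple silicon. The sketch below follows the standard `mlx-lm` loading pattern; the Hugging Face repo ID is inferred from the listing name and should be verified before use.

```python
# Minimal sketch: run an mlx-community MLX model with mlx-lm on Apple silicon.
# The repo ID below is inferred from the listing name and may need adjusting.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-Small-3.2-24B-Instruct-2506-bf16")

# Apply the chat template so the instruction-tuned model sees the expected format.
messages = [{"role": "user", "content": "Summarize the benefits of bf16 inference in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```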
## UI TARS 1.5 7B 6bit

- Maintainer: mlx-community
- License: Apache-2.0
- Tags: Image-to-Text, Transformers, multilingual
- Downloads: 1,110 · Likes: 3

UI-TARS-1.5-7B-6bit is a vision-language model converted to the MLX format with 6-bit quantization, supporting image understanding and text generation.
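
MLX-converted vision-language models like this one are usually run with the `mlx-vlm` package. The sketch below is an assumption-heavy outline: the repo ID and image path are placeholders, and `mlx_vlm.generate`'s argument handling has varied across releases, so check it against the installed version.

```python
# Hedged sketch: run an MLX vision-language model with mlx-vlm.
# Repo ID and image path are placeholders; mlx_vlm.generate's arguments
# have changed between releases, so adjust to match your installed version.
from mlx_vlm import load, generate

model, processor = load("mlx-community/UI-TARS-1.5-7B-6bit")

output = generate(
    model,
    processor,
    prompt="Describe the UI elements visible in this screenshot.",
    image="screenshot.png",  # hypothetical local file
)
print(output)
```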
## DeepSeek R1 GGUF

- Maintainer: unsloth
- License: MIT
- Tags: Large Language Model, English
- Downloads: 2.0M · Likes: 1,045

DeepSeek-R1 in GGUF format with Unsloth's 1.58-bit dynamic quantization; it uses a Mixture-of-Experts (MoE) architecture and targets English-language tasks.
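
GGUF checkpoints such as this one are commonly run with llama.cpp or its Python bindings. A minimal llama-cpp-python sketch follows; the shard filename is a placeholder based on the listing, so download the 1.58-bit files from the unsloth/DeepSeek-R1-GGUF repo and point at the actual first shard.

```python
# Hedged sketch: run a GGUF checkpoint with llama-cpp-python.
# The file path is a placeholder; download the 1.58-bit quant shards from
# unsloth/DeepSeek-R1-GGUF first and point at the first shard on disk.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf",  # placeholder filename
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload as many layers as possible to the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what 1.58-bit dynamic quantization trades off."}]
)
print(out["choices"][0]["message"]["content"])
```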