# 4-bit GPTQ quantization
## Devstral-Small-2505.w4a16-GPTQ
- License: Apache-2.0
- Author: mratsim
- Tags: Large Language Model, Safetensors
- Downloads: 557 · Likes: 1

A 4-bit GPTQ-quantized version of the mistralai/Devstral-Small-2505 model, optimized for consumer-grade hardware.
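The "w4a16" label means weights are stored as 4-bit integers while activations stay in 16-bit floats. As a minimal sketch of the per-group asymmetric quantize/dequantize step that GPTQ-style tools apply to each weight group (the group size, function names, and rounding scheme below are illustrative assumptions, not this model's exact recipe):

```python
import numpy as np

def quantize_group(w, n_bits=4):
    """Asymmetric quantization of one weight group to ints in [0, 2^n - 1]."""
    qmax = 2 ** n_bits - 1
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / qmax if w_max > w_min else 1.0
    zero = np.round(-w_min / scale)          # zero-point shifts the range
    q = np.clip(np.round(w / scale) + zero, 0, qmax).astype(np.uint8)
    return q, scale, zero

def dequantize_group(q, scale, zero):
    """Recover approximate fp32 weights: w ≈ (q - zero) * scale."""
    return (q.astype(np.float32) - zero) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=128).astype(np.float32)  # one quantization group
q, scale, zero = quantize_group(w)
w_hat = dequantize_group(q, scale, zero)
err = np.abs(w - w_hat).max()
# round-to-nearest keeps the error within half a quantization step
assert err <= scale / 2 + 1e-6
```

GPTQ proper goes further than this round-to-nearest baseline: it quantizes weights column by column and compensates each column's rounding error using second-order (Hessian) information from calibration data, which is what keeps 4-bit accuracy close to the fp16 original.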
## QwQ-32B-GPTQModel-4bit-Vortex-v1
- License: Apache-2.0
- Author: ModelCloud
- Tags: Large Language Model, Safetensors, English
- Downloads: 1,620 · Likes: 11

QwQ-32B is a 32B-parameter large language model based on the Qwen2 architecture, quantized to 4-bit integers with the GPTQ method; this version is suitable for efficient text generation tasks.
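Much of the memory saving of a 4-bit model comes from packing two 4-bit codes into each stored byte, so int4 weights take half the space of int8 and a quarter of fp16. A hedged sketch of such nibble packing (the low-nibble-first layout and helper names are assumptions for illustration; real GPTQ kernels define their own packed formats):

```python
import numpy as np

def pack_int4(q):
    """Pack pairs of 4-bit codes into single bytes, low nibble first."""
    assert q.size % 2 == 0 and q.max() <= 15
    q = q.astype(np.uint8)
    return (q[0::2] | (q[1::2] << 4)).astype(np.uint8)

def unpack_int4(packed):
    """Invert pack_int4: split each byte back into two 4-bit codes."""
    out = np.empty(packed.size * 2, dtype=np.uint8)
    out[0::2] = packed & 0x0F   # low nibble
    out[1::2] = packed >> 4     # high nibble
    return out

q = np.array([1, 15, 0, 7, 3, 12], dtype=np.uint8)
packed = pack_int4(q)            # 3 bytes instead of 6
assert np.array_equal(unpack_int4(packed), q)
```

At inference time a w4a16 kernel unpacks and dequantizes these codes on the fly, multiplying against fp16 activations, which is why such models fit on consumer GPUs without changing the activation datapath.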