# 🚀 Qwen/QwQ-32B (Quantized)

This is a quantized version of the original model Qwen/QwQ-32B, offering enhanced efficiency.
## 🚀 Quick Start

This model is a quantized version of the original model Qwen/QwQ-32B. It was quantized to 4-bit with the BitsAndBytes library using the bnb-my-repo space.
## ✨ Features

- **Quantized Model**: A 4-bit quantized version of the original Qwen/QwQ-32B model, produced with the BitsAndBytes library.
- **Enhanced Reasoning**: QwQ-series models, especially QwQ-32B, show strong reasoning capabilities and enhanced performance in downstream tasks.
- **Competitive Performance**: Achieves competitive results against state-of-the-art reasoning models.
- **Long Context Support**: Supports a full 131,072-token context length.
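To give a rough feel for what 4-bit quantization trades off, here is a toy block-wise absmax quantizer in plain Python. This is a simplified sketch only, not the BitsAndBytes algorithm: the actual NF4/FP4 schemes use non-uniform code books and fused CUDA kernels.

```python
def quantize_4bit(weights, block_size=64):
    """Toy block-wise absmax quantization to signed 4-bit integers (-7..7).

    Illustrative sketch only; not the BitsAndBytes NF4/FP4 algorithm.
    """
    blocks = []
    for start in range(0, len(weights), block_size):
        block = weights[start:start + block_size]
        scale = max(abs(w) for w in block) or 1.0  # absmax per block
        q = [round(w / scale * 7) for w in block]  # map each weight to [-7, 7]
        blocks.append((scale, q))
    return blocks


def dequantize_4bit(blocks):
    """Reconstruct approximate float weights from (scale, int4) blocks."""
    out = []
    for scale, q in blocks:
        out.extend(v / 7 * scale for v in q)
    return out


weights = [0.12, -0.5, 0.33, 0.9, -0.01, 0.0, 0.7, -0.25]
restored = dequantize_4bit(quantize_4bit(weights, block_size=4))
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Storing one float scale per small block is what lets 4-bit codes cover both large and tiny weights with bounded relative error, at roughly a quarter of the bf16 memory footprint.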
## 📦 Installation

Installation instructions were not provided in the original model card. Running a 4-bit BitsAndBytes checkpoint typically requires the `transformers`, `accelerate`, and `bitsandbytes` packages.
## 💻 Usage Examples

### Basic Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/QwQ-32B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r's are in the word \"strawberry\""
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
# Strip the prompt tokens so that only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
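The decoded `response` contains the model's reasoning followed by its final answer, with a closing `</think>` tag marking the boundary. A minimal sketch for separating the two (the helper name is illustrative, not part of the model card):

```python
def split_thinking(response: str):
    """Split a QwQ response into (thinking, final_answer).

    Assumes the reasoning ends with a "</think>" tag; if the tag is
    absent, the whole response is treated as the final answer.
    """
    marker = "</think>"
    if marker in response:
        thinking, _, answer = response.partition(marker)
        return thinking.replace("<think>", "").strip(), answer.strip()
    return "", response.strip()


thinking, answer = split_thinking(
    "<think>\nCount the r's: s-t-r-a-w-b-e-r-r-y has three.\n</think>\n"
    "There are 3 r's in \"strawberry\"."
)
```

Keeping only `answer` in the conversation history also matches the multi-turn guidance below: historical turns should not carry the thinking content.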
### Advanced Usage

- **Enforce Thoughtful Output**: Ensure the model starts with `"<think>\n"` to prevent it from generating empty thinking content. When using `apply_chat_template` with `add_generation_prompt=True`, this is handled automatically, but as a consequence the response may lack the opening `<think>` tag, which is normal.
- **Sampling Parameters**:
  - Use Temperature=0.6, TopP=0.95, MinP=0 instead of greedy decoding to avoid endless repetitions.
  - Use TopK between 20 and 40 to filter out rare token occurrences while maintaining output diversity.
  - For supported frameworks, adjust the `presence_penalty` parameter between 0 and 2 to reduce repetitions; higher values may cause occasional language mixing and a slight performance decrease.
- **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output, not the thinking content. This is already implemented in `apply_chat_template`.
- **Standardize Output Format**:
  - Math problems: include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
  - Multiple-choice questions: add "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." to the prompt.
- **Handle Long Inputs**: For inputs over 8,192 tokens, enable YaRN by adding the following to `config.json`:

  ```json
  {
    ...,
    "rope_scaling": {
      "factor": 4.0,
      "original_max_position_embeddings": 32768,
      "type": "yarn"
    }
  }
  ```

For deployment, vLLM is recommended. Refer to the Documentation for usage. Note that vLLM currently supports only static YaRN, which may impact performance on shorter texts, so add the `rope_scaling` configuration only for long contexts.
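Since static YaRN applies its scaling regardless of input length, one way to follow this advice is to patch the config dictionary only when long-context support is actually needed. A minimal sketch (the helper name is illustrative; the `rope_scaling` values mirror the block above):

```python
def enable_yarn(config: dict, factor: float = 4.0) -> dict:
    """Return a copy of a model config dict with static YaRN enabled.

    The rope_scaling values mirror those recommended in the model card.
    Illustrative helper, not part of transformers or vLLM.
    """
    patched = dict(config)
    patched["rope_scaling"] = {
        "factor": factor,
        "original_max_position_embeddings": 32768,
        "type": "yarn",
    }
    return patched


# Minimal stand-in for the contents of config.json
config = {"max_position_embeddings": 131072}
long_context_config = enable_yarn(config)
```

The original config is left untouched, so the unscaled variant can still be used for short-context serving.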
## 📚 Documentation

### QwQ-32B Introduction

QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially on hard problems. QwQ-32B is the medium-sized reasoning model, which achieves competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1 and o1-mini.

This repo contains the QwQ 32B model, which has the following features:
| Property | Details |
|---|---|
| Model Type | Causal Language Model |
| Training Stage | Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning) |
| Architecture | Transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias |
| Number of Parameters | 32.5B |
| Number of Parameters (Non-Embedding) | 31.0B |
| Number of Layers | 64 |
| Number of Attention Heads (GQA) | 40 for Q and 8 for KV |
| Context Length | Full 131,072 tokens. For prompts over 8,192 tokens, enable YaRN as described above. |
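The GQA layout above (8 KV heads versus 40 query heads) is what keeps the KV cache tractable at the full 131,072-token context. A back-of-the-envelope estimate, assuming a head dimension of 128 and bf16 (2-byte) cache entries — the head dimension is an assumption, not stated in the table:

```python
# Per-token KV cache: 2 tensors (K and V) x layers x kv_heads x head_dim
layers, kv_heads, head_dim, bytes_per_elem = 64, 8, 128, 2  # head_dim assumed

kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
context_len = 131072
kv_cache_gib = kv_bytes_per_token * context_len / 2**30

# Without GQA (40 KV heads matching the 40 query heads),
# the same cache would be 5x larger.
mha_cache_gib = kv_cache_gib * 40 / 8
```

Under these assumptions the full-context KV cache is 32 GiB, versus 160 GiB with full multi-head attention, which is why GQA matters for long-context serving.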
You can try our demo or access QwQ models via QwenChat. For more details, refer to our blog, GitHub, and Documentation.
### Evaluation & Performance

Detailed evaluation results are reported in this 📑 blog. For requirements on GPU memory and the respective throughput, see the results here.
### Citation

If you find our work helpful, feel free to cite us.

```bibtex
@misc{qwq32b,
    title = {QwQ-32B: Embracing the Power of Reinforcement Learning},
    url = {https://qwenlm.github.io/blog/qwq-32b/},
    author = {Qwen Team},
    month = {March},
    year = {2025}
}

@article{qwen2.5,
    title={Qwen2.5 Technical Report},
    author={An Yang and Baosong Yang and Beichen Zhang and Binyuan Hui and Bo Zheng and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoran Wei and Huan Lin and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Yang and Jiaxi Yang and Jingren Zhou and Junyang Lin and Kai Dang and Keming Lu and Keqin Bao and Kexin Yang and Le Yu and Mei Li and Mingfeng Xue and Pei Zhang and Qin Zhu and Rui Men and Runji Lin and Tianhao Li and Tianyi Tang and Tingyu Xia and Xingzhang Ren and Xuancheng Ren and Yang Fan and Yang Su and Yichang Zhang and Yu Wan and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zihan Qiu},
    journal={arXiv preprint arXiv:2412.15115},
    year={2024}
}
```
## 📄 License

This model is licensed under the Apache 2.0 license.
## ⚠️ Important Note

For the best experience, please review the usage guidelines before deploying QwQ models.

For inputs exceeding 8,192 tokens, enable YaRN to improve the model's ability to capture long-sequence information. At present, vLLM supports only static YaRN, which may impact performance on shorter texts, so add the `rope_scaling` configuration only when processing long contexts is required.

