Qwen3-8B
Qwen3-8B is a powerful causal language model with advanced features such as seamless mode switching, strong reasoning capabilities, and multilingual support, offering a high-quality conversational experience.
Quick Start
The code for Qwen3 has been integrated into the latest Hugging Face transformers, and we highly recommend using the latest version. With transformers<4.51.0, you'll encounter the following error:
KeyError: 'qwen3'
Here is a code snippet demonstrating how to use the model to generate content based on given inputs:
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-8B"

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
# Prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # switches between thinking and non-thinking modes; default is True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# Parse out the thinking content: 151668 is the token id of </think>
try:
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    # no </think> token found, so there is no thinking content
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
For deployment, you can use sglang>=0.4.6.post1 or vllm>=0.8.4 to create an OpenAI-compatible API endpoint:
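A sketch of each (flag names as of the versions above; check each framework's current documentation if they have changed):

SGLang:
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3

vLLM:
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1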
For local use, applications such as llama.cpp, Ollama, LMStudio, and MLX-LM also support Qwen3.
Features
Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. After extensive training, Qwen3 has made groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- Seamless Mode Switching: It uniquely supports seamless switching between thinking mode (for complex logical reasoning, math, and coding) and non-thinking mode (for efficient, general-purpose dialogue) within a single model, ensuring optimal performance across scenarios.
- Enhanced Reasoning Capabilities: It significantly surpasses the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) in mathematics, code generation, and commonsense logical reasoning.
- Superior Human Preference Alignment: It excels in creative writing, role-playing, multi-turn dialogue, and instruction following, delivering a more natural, engaging, and immersive conversational experience.
- Expert Agent Capabilities: It integrates precisely with external tools in both thinking and non-thinking modes and achieves leading performance among open-source models on complex agent-based tasks.
- Multilingual Support: It supports 100+ languages and dialects with strong capabilities for multilingual instruction following and translation.
Installation
The code for Qwen3 is included in the latest Hugging Face transformers; to use Qwen3, you just need to install the latest version of transformers.
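For example, with pip:

pip install -U "transformers>=4.51.0"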
Usage Examples
Basic Usage
The quick-start code above is a basic example of using Qwen3-8B to generate text.
Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when enable_thinking=True. Specifically, you can add /think and /no_think to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
from transformers import AutoModelForCausalLM, AutoTokenizer

class QwenChatbot:
    def __init__(self, model_name="Qwen/Qwen3-8B"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.history = []

    def generate_response(self, user_input):
        messages = self.history + [{"role": "user", "content": user_input}]
        text = self.tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True
        )
        inputs = self.tokenizer(text, return_tensors="pt").to(self.model.device)
        response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
        response = self.tokenizer.decode(response_ids, skip_special_tokens=True)

        # Update history so the soft switch persists across turns
        self.history.append({"role": "user", "content": user_input})
        self.history.append({"role": "assistant", "content": response})

        return response

if __name__ == "__main__":
    chatbot = QwenChatbot()

    # First input (no /think or /no_think tag, so thinking mode is on by default)
    user_input_1 = "How many r's in strawberries?"
    print(f"User: {user_input_1}")
    response_1 = chatbot.generate_response(user_input_1)
    print(f"Bot: {response_1}")
    print("----------------------")

    # Second input with /no_think
    user_input_2 = "Then, how many r's in blueberries? /no_think"
    print(f"User: {user_input_2}")
    response_2 = chatbot.generate_response(user_input_2)
    print(f"Bot: {response_2}")
    print("----------------------")

    # Third input with /think
    user_input_3 = "Really? /think"
    print(f"User: {user_input_3}")
    response_3 = chatbot.generate_response(user_input_3)
    print(f"Bot: {response_3}")
Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools yourself.
from qwen_agent.agents import Assistant

# Define the LLM (served via an OpenAI-compatible endpoint, e.g. vLLM or SGLang)
llm_cfg = {
    'model': 'Qwen3-8B',
    'model_server': 'http://localhost:8000/v1',
    'api_key': 'EMPTY',
}

# Define tools: MCP servers plus the built-in code interpreter
tools = [
    {'mcpServers': {
        'time': {
            'command': 'uvx',
            'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
        },
        'fetch': {
            'command': 'uvx',
            'args': ['mcp-server-fetch']
        }
    }},
    'code_interpreter',  # built-in tool
]

# Define the agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation: the last yielded chunk holds the full response
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
Documentation
Model Overview
Qwen3-8B has the following features:
| Property | Details |
|----------|---------|
| Model Type | Causal Language Models |
| Training Stage | Pretraining & Post-training |
| Number of Parameters | 8.2B |
| Number of Parameters (Non-Embedding) | 6.95B |
| Number of Layers | 36 |
| Number of Attention Heads (GQA) | 32 for Q and 8 for KV |
| Context Length | 32,768 tokens natively and 131,072 tokens with YaRN |
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.
Switching Between Thinking and Non-Thinking Modes
Important Note
The enable_thinking switch is also available in APIs created by SGLang and vLLM. Please refer to our documentation for SGLang and vLLM users.
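For such OpenAI-compatible endpoints, the switch is typically passed per request via chat_template_kwargs. A minimal sketch with the openai Python client (the base_url and model name are placeholders for your deployment, and the chat_template_kwargs pass-through is an assumption about your server version):

from openai import OpenAI

# Point the client at a locally served OpenAI-compatible endpoint
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen3-8B",
    messages=[{"role": "user", "content": "How many r's in strawberries?"}],
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},  # toggle thinking here
)
print(response.choices[0].message.content)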
enable_thinking=True
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting enable_thinking=True or leaving it as the default value in tokenizer.apply_chat_template, the model will engage its thinking mode.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # True is the default value for enable_thinking
)
In this mode, the model will generate thinking content wrapped in a <think>...</think> block, followed by the final response.
Important Note
For thinking mode, use Temperature=0.6, TopP=0.95, TopK=20, and MinP=0 (the default setting in generation_config.json). DO NOT use greedy decoding, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the Best Practices section.
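As a minimal sketch of applying these settings explicitly with transformers, reusing model_inputs from the quick-start snippet (min_p requires a recent transformers version):

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # sampling, not greedy decoding
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)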
enable_thinking=False
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # setting enable_thinking=False disables thinking mode
)
In this mode, the model will not generate any thinking content and will not include a <think>...</think> block.
Important Note
For non-thinking mode, we suggest using Temperature=0.7, TopP=0.8, TopK=20, and MinP=0. For more detailed guidance, please refer to the Best Practices section.
Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the YaRN method.
YaRN is currently supported by several inference frameworks, e.g., transformers and llama.cpp for local use, and vllm and sglang for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files: In the config.json file, add the rope_scaling fields:
  {
      ...,
      "rope_scaling": {
          "type": "yarn",
          "factor": 4.0,
          "original_max_position_embeddings": 32768
      }
  }
  For llama.cpp, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
  For vllm, you can use
  vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
  For sglang, you can use
  python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
  For llama-server from llama.cpp, you can use
  llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
Important Note
If you encounter the following warning:
Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
please upgrade to transformers>=4.51.0.
Important Note
All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required. It is also recommended to adjust the factor as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set factor to 2.0.
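For instance, for a typical 65,536-token context (65,536 / 32,768 = 2.0), the config.json block above becomes:

{
    ...,
    "rope_scaling": {
        "type": "yarn",
        "factor": 2.0,
        "original_max_position_embeddings": 32768
    }
}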
Important Note
The default max_position_embeddings in config.json is set to 40,960. This allocation reserves 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN, as it may degrade model performance.
Usage Tip
The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default, and no extra configuration is needed.
Best Practices
To achieve optimal performance, we recommend the following settings:
- Sampling Parameters:
  - For thinking mode (enable_thinking=True), use Temperature=0.6, TopP=0.95, TopK=20, and MinP=0. DO NOT use greedy decoding, as it can lead to performance degradation and endless repetitions.
  - For non-thinking mode (enable_thinking=False), we suggest using Temperature=0.7, TopP=0.8, TopK=20, and MinP=0.
  - For supported frameworks, you can adjust the presence_penalty parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
- Adequate Output Length: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
- Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.
- Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the answer field with only the choice letter, e.g., "answer": "C"."
- No Thinking Content in History: In multi-turn conversations, the historical model output should include only the final output part, not the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed, as sketched below.
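A minimal sketch of such stripping, assuming the <think>...</think> markers appear verbatim in the raw output (the helper name is illustrative):

import re

def final_output_only(raw_response: str) -> str:
    """Remove the <think>...</think> block so history keeps only the final answer."""
    return re.sub(r"<think>.*?</think>", "", raw_response, flags=re.DOTALL).strip("\n")

# Usage: store only the visible answer in the conversation history, e.g.
# history.append({"role": "assistant", "content": final_output_only(response)})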
Technical Details
The detailed technical implementation and optimization details of Qwen3-8B are described in our blog, GitHub, and Documentation.
License
This project is licensed under the Apache 2.0 license.
Citation
If you find our work helpful, feel free to cite us.
@misc{qwen3,
    title  = {Qwen3},
    url    = {https://qwenlm.github.io/blog/qwen3/},
    author = {Qwen Team},
    month  = {April},
    year   = {2025}
}