QwenLong-L1-32B GGUF Models
This project provides GGUF builds of QwenLong-L1-32B, quantized to suit a range of hardware and memory budgets. The models target long-context reasoning tasks and come in multiple formats, from ultra-low-bit quantizations up to full BF16.
Features
- Ultra-Low-Bit Quantization: Introduces precision-adaptive quantization for ultra-low-bit models (1-2 bit), with benchmarked improvements over standard 1-2 bit quantization on Llama-3-8B.
- Multiple Model Formats: Supports BF16, F16, and various quantized formats (Q4_K, Q6_K, etc.) to meet different hardware and memory requirements.
- Long-Context Reasoning: Trained with reinforcement learning for long-context reasoning, outperforming many flagship LRMs.
Installation
```bash
# Create the conda environment
conda create -n qwenlongl1 python==3.10
conda activate qwenlongl1

# Install requirements
pip3 install -r requirements.txt

# Install verl
cd verl
pip3 install -e .

# Install vLLM
pip3 install vllm==0.7.3

# Install flash-attn
pip3 install flash-attn --no-build-isolation
```
Usage Examples
Basic Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Tongyi-Zhiwen/QwenLong-L1-32B"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
template = """Please read the following text and answer the question below.
<text>
$DOC$
</text>
$Q$
Format your response as follows: "Therefore, the answer is (insert answer here)"."""

context = "<YOUR_CONTEXT_HERE>"
question = "<YOUR_QUESTION_HERE>"
prompt = template.replace('$DOC$', context.strip()).replace('$Q$', question.strip())
messages = [
    # {"role": "system", "content": "You are QwenLong-L1, created by Alibaba Tongyi Lab. You are a helpful assistant."},  # Use a system prompt to define identity when needed.
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=10000,
    temperature=0.7,
    top_p=0.95
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parse the thinking content
try:
    # rindex finding 151649 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151649)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
Documentation
Model Generation Details
This model was generated using llama.cpp at commit f5cd27b7.
Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)
Our latest quantization method introduces precision-adaptive quantization for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on Llama-3-8B. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.
Benchmark Context
All tests conducted on Llama-3-8B-Instruct using:
- Standard perplexity evaluation pipeline
- 2048-token context window
- Same prompt set across all quantizations
Method
- Dynamic Precision Allocation:
- First/Last 25% of layers → IQ4_XS (selected layers)
- Middle 50% → IQ2_XXS/IQ3_S (increase efficiency)
- Critical Component Protection:
- Embeddings/output layers use Q5_K
- Reduces error propagation by 38% vs standard 1-2 bit quantization
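A minimal sketch of this layer-bucketing idea is shown below. It is illustrative only, not the actual IQ-DynamicGate/llama.cpp implementation; the bucket boundaries, tensor names, and layer count simply mirror the rules listed above.
```python
# Illustrative sketch of the dynamic precision allocation described above.
# This is NOT the real IQ-DynamicGate code; it only mirrors the bucketing rules.

def pick_quant_type(layer_idx: int, n_layers: int) -> str:
    """Assign a quantization type to a transformer block by its relative position."""
    position = layer_idx / n_layers
    if position < 0.25 or position >= 0.75:
        return "IQ4_XS"   # first/last 25% of layers keep higher precision
    return "IQ2_XXS"      # middle 50% uses the most aggressive low-bit types

# Embeddings and the output head are protected separately with Q5_K.
n_layers = 64  # illustrative layer count
plan = {"token_embd": "Q5_K", "output": "Q5_K"}
plan.update({f"blk.{i}": pick_quant_type(i, n_layers) for i in range(n_layers)})
print(plan["blk.0"], plan["blk.32"], plan["blk.63"])  # IQ4_XS IQ2_XXS IQ4_XS
```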
Quantization Performance Comparison (Llama-3-8B)
Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
---|---|---|---|---|---|---|---|---|
IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |
Key:
- PPL = Perplexity (lower is better)
- Δ PPL = Percentage change from standard to DynamicGate
- Speed = Inference time (CPU AVX2, 2048-token context)
- Size differences reflect mixed quantization overhead
Key Improvements:
- IQ1_M shows massive 43.9% perplexity reduction (27.46 → 15.41)
- IQ2_S cuts perplexity by 36.9% while adding only 0.2GB
- IQ1_S maintains 39.7% better accuracy despite 1-bit quantization
Tradeoffs:
- All variants have modest size increases (0.1-0.3 GB)
- Inference speeds remain comparable (<5% difference)
When to Use These Models
- Fitting models into GPU VRAM
- Memory-constrained deployments
- CPU and edge devices where 1-2 bit errors can be tolerated (a loading example follows this list)
- Research into ultra-low-bit quantization
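As an example of the memory-constrained cases above, a quantized GGUF file can be run with `llama-cpp-python`. This is a minimal sketch: the filename, context size, and layer-offload count are illustrative and should be adjusted to your hardware.
```python
# Minimal sketch: running a low-bit GGUF with llama-cpp-python.
# Filename, n_ctx, and n_gpu_layers are illustrative; tune them for your machine.
from llama_cpp import Llama

llm = Llama(
    model_path="QwenLong-L1-32B-iq3_xs.gguf",  # any of the quantized files listed later
    n_ctx=32768,       # context window; lower it if RAM is tight
    n_gpu_layers=20,   # offload as many layers as fit in VRAM; 0 = CPU only
)
out = llm("Q: What does ultra-low-bit quantization trade away? A:", max_tokens=128)
print(out["choices"][0]["text"])
```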
Choosing the Right Model Format
Selecting the correct model format depends on your hardware capabilities and memory constraints.
BF16 (Brain Float 16) – Use if BF16 acceleration is available
- A 16-bit floating-point format designed for faster computation while retaining good precision.
- Provides similar dynamic range as FP32 but with lower memory usage.
- Recommended if your hardware supports BF16 acceleration (check your device's specs).
- Ideal for high-performance inference with reduced memory footprint compared to FP32.
Use BF16 if:
- Your hardware has native BF16 support (e.g., newer GPUs, TPUs).
- You want higher precision while saving memory.
- You plan to requantize the model into another format.
Avoid BF16 if:
- Your hardware does not support BF16 (it may fall back to FP32 and run slower).
- You need compatibility with older devices that lack BF16 optimization.
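A quick way to check for BF16 support with PyTorch is sketched below (CUDA GPUs only; other accelerators need their own checks).
```python
# Quick capability check before choosing between BF16 and F16 (PyTorch, CUDA only).
import torch

if torch.cuda.is_available():
    print("BF16 supported:", torch.cuda.is_bf16_supported())
    print("Compute capability:", torch.cuda.get_device_capability())  # (8, 0)+ implies native BF16
else:
    print("No CUDA device found; a quantized CPU format is likely the better choice.")
```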
F16 (Float 16) – More widely supported than BF16
- A 16-bit floating-point format with high precision but a smaller range of values than BF16.
- Works on most devices with FP16 acceleration support (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
Use F16 if:
- Your hardware supports FP16 but not BF16.
- You need a balance between speed, memory usage, and accuracy.
- You are running on a GPU or another device optimized for FP16 computations.
Avoid F16 if:
- Your device lacks native FP16 support (it may run slower than expected).
- You have memory limitations.
Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- Lower-bit models (Q4_K) → Best for minimal memory usage, may have lower precision.
- Higher-bit models (Q6_K, Q8_0) → Better accuracy, requires more memory (rough size estimates are sketched below).
Use Quantized Models if:
- You are running inference on a CPU and need an optimized model.
- Your device has low VRAM and cannot load full-precision models.
- You want to reduce memory footprint while keeping reasonable accuracy.
Avoid Quantized Models if:
- You need maximum accuracy (full-precision models are better for this).
- Your hardware has enough VRAM for higher-precision formats (BF16/F16).
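As a rough rule of thumb, a quantized file's size is about parameters × bits-per-weight / 8. The sketch below estimates sizes for a 32B-parameter model; the bits-per-weight values are approximate averages (an assumption, not measured from these files), and real GGUF files add some metadata overhead.
```python
# Back-of-the-envelope size estimates for a 32B-parameter model.
# Bits-per-weight values are approximate averages, not measurements of these files.
PARAMS = 32e9
APPROX_BITS_PER_WEIGHT = {"BF16": 16, "Q8_0": 8.5, "Q6_K": 6.6, "Q4_K": 4.8, "IQ3_XS": 3.3}

for fmt, bits in APPROX_BITS_PER_WEIGHT.items():
    size_gb = PARAMS * bits / 8 / 1e9
    print(f"{fmt:7s} ~{size_gb:.0f} GB")
```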
Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)
These models are optimized for extreme memory efficiency, making them ideal for low-power devices or large-scale deployments where memory is a critical constraint.
- IQ3_XS: Ultra-low-bit quantization (3-bit) with extreme memory efficiency.
- Use case: Best for ultra-low-memory devices where even Q4_K is too large.
- Trade-off: Lower accuracy compared to higher-bit quantizations.
- IQ3_S: Small block size for maximum memory efficiency.
- Use case: Best for low-memory devices where IQ3_XS is too aggressive.
- IQ3_M: Medium block size for better accuracy than IQ3_S.
- Use case: Suitable for low-memory devices where IQ3_S is too limiting.
- Q4_K: 4-bit quantization with block-wise optimization for better accuracy.
- Use case: Best for low-memory devices where Q6_K is too large.
- Q4_0: Pure 4-bit quantization, optimized for ARM devices.
- Use case: Best for ARM-based devices or low-memory environments.
Summary Table: Model Format Selection
Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
---|---|---|---|---|
BF16 | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
F16 | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
Q4_K | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments |
Q6_K | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
Q8_0 | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
IQ3_XS | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy |
Q4_0 | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
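If it helps, the table's decision flow can be written as a toy selector. The thresholds below are rough, derived from the approximate sizes sketched earlier, and hardware detection is left to the reader.
```python
# Toy selector mirroring the summary table above. Thresholds are illustrative only.
def choose_format(has_bf16: bool, has_fp16: bool, vram_gb: float, arm_cpu: bool = False) -> str:
    if arm_cpu:
        return "Q4_0"      # llama.cpp has ARM-optimized kernels for this format
    if vram_gb >= 70:
        return "BF16" if has_bf16 else ("F16" if has_fp16 else "Q8_0")
    if vram_gb >= 35:
        return "Q8_0"
    if vram_gb >= 25:
        return "Q6_K"
    if vram_gb >= 20:
        return "Q4_K"
    return "IQ3_XS"

print(choose_format(has_bf16=True, has_fp16=True, vram_gb=24))  # -> Q4_K
```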
Included Files & Details
- `QwenLong-L1-32B-bf16.gguf`: Model weights preserved in BF16. Use this if you want to requantize the model into a different format. Best if your device supports BF16 acceleration.
- `QwenLong-L1-32B-f16.gguf`: Model weights stored in F16. Use if your device supports FP16, especially if BF16 is not available.
- `QwenLong-L1-32B-bf16-q8_0.gguf`: Output & embeddings remain in BF16. All other layers quantized to Q8_0. Use if your device supports BF16 and you want a quantized version.
- `QwenLong-L1-32B-f16-q8_0.gguf`: Output & embeddings remain in F16. All other layers quantized to Q8_0.
- `QwenLong-L1-32B-q4_k.gguf`: Output & embeddings quantized to Q8_0. All other layers quantized to Q4_K. Good for CPU inference with limited memory.
- `QwenLong-L1-32B-q4_k_s.gguf`: Smallest Q4_K variant, using less memory at the cost of accuracy. Best for very low-memory setups.
- `QwenLong-L1-32B-q6_k.gguf`: Output & embeddings quantized to Q8_0. All other layers quantized to Q6_K.
- `QwenLong-L1-32B-q8_0.gguf`: Fully Q8_0-quantized model for better accuracy. Requires more memory but offers higher precision.
- `QwenLong-L1-32B-iq3_xs.gguf`: IQ3_XS quantization, optimized for extreme memory efficiency. Best for ultra-low-memory devices.
- `QwenLong-L1-32B-iq3_m.gguf`: IQ3_M quantization, offering a medium block size for better accuracy. Suitable for low-memory devices.
- `QwenLong-L1-32B-q4_0.gguf`: Pure Q4_0 quantization, optimized for ARM devices. Best for low-memory environments. Prefer IQ4_NL for better accuracy.
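To fetch a single file rather than the whole repository, `huggingface_hub` can download one artifact at a time. The `repo_id` below is a placeholder; substitute the repository this card belongs to.
```python
# Download one GGUF file instead of cloning the whole repository.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="<GGUF_REPO_ID_HERE>",          # placeholder; use the actual repository id
    filename="QwenLong-L1-32B-q4_k.gguf",   # pick the variant that fits your hardware
)
print("Downloaded to:", path)
```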
Testing the Models
If you find these models useful, please click "Like"! Help me test my AI-Powered Network Monitor Assistant with quantum-ready security checks: Free Network Monitor
How to test
Choose an AI assistant type:
- TurboLLM (GPT-4o-mini)
- HugLLM (Hugging Face open-source)
- TestLLM (Experimental CPU-only)
What I'm Testing
I'm pushing the limits of small open-source models for AI network monitoring, specifically:
- Function calling against live network services
- How small can a model go while still handling:
- Automated Nmap scans
- Quantum-readiness checks
- Network Monitoring tasks
TestLLM – Current experimental model (llama.cpp on 2 CPU threads)
- Zero-configuration setup
- ~30s load time (slow inference but no API costs)
- Help wanted! If you're into edge-device AI, let's collaborate!
Other Assistants
- TurboLLM – Uses gpt-4o-mini for:
- Create custom cmd processors to run .net code on Free Network Monitor Agents
- Real-time network diagnostics and monitoring
- Security Audits
- Penetration testing (Nmap/Metasploit)
- Get more tokens by logging in or downloading our Free Network Monitor Agent with integrated AI Assistant
- HugLLM – Latest open-source models:
- Runs on Hugging Face Inference API
Example commands you could test
"Give me info on my websites SSL certificate"
"Check if my server is using quantum safe encyption for communication"
"Run a comprehensive security audit on my server"
- '"Create a cmd processor to .. (what ever you want)" Note you need to install a Free Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution!
News
- May 28, 2025: We release QwenLong-L1-32B-AWQ, which has undergone AWQ int4 quantization using the ms-swift framework.
- May 26, 2025: We release QwenLong-L1-32B, which is the first long-context LRM trained with reinforcement learning for long-context reasoning. Experiments on seven long-context DocQA benchmarks demonstrate that QwenLong-L1-32B outperforms flagship LRMs like OpenAI-o3-mini and Qwen3-235B-A22B, achieving performance on par with Claude-3.7-Sonnet-Thinking, demonstrating leading performance among state-of-the-art LRMs.
- May 26, 2025: We release DocQA-RL-1.6K, which is a specialized RL training dataset comprising 1.6K document question answering (DocQA) problems spanning mathematical, logical, and multi-hop reasoning domains.
Introduction
In this work, we propose QwenLong-L1, a novel reinforcement learning (RL) framework designed to facilitate the transition of LRMs from short-context proficiency to robust long-context generalization. In our preliminary experiments, we illustrate the differences between the training dynamics of short-context and long-context reasoning RL.
Our framework enhances short-context LRMs through progressive context scaling during RL training. The framework comprises three core components: a warm-up supervised fine-tuning (SFT) phase to initialize a robust policy, a curriculum-guided RL phase that facilitates stable adaptation from short to long contexts, and a difficulty-aware retrospective sampling mechanism that adjusts training complexity across stages to incentivize policy exploration. Leveraging recent RL algorithms, including GRPO and DAPO, our framework integrates hybrid reward functions combining rule-based and model-based binary outcome rewards to balance precision and recall. Through strategic utilization of group relative advantages during policy optimization, it guides LRMs to learn effective reasoning patterns essential for robust long-context grounding and superior reasoning capabilities.
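As a rough illustration of the hybrid reward described above, the sketch below combines a rule-based check with a model-based judge. It is a simplified, assumption-laden sketch, not the released training code: the exact-match rule, the judge stub, and the max-combination are illustrative choices.
```python
# Simplified sketch of a hybrid binary outcome reward for long-context DocQA RL.
# Not the released training code: the normalization, judge stub, and max-combination
# are illustrative assumptions based on the description above.

def rule_based_reward(prediction: str, gold: str) -> float:
    """Binary reward from a strict rule (here: normalized exact match); favors precision."""
    return float(prediction.strip().lower() == gold.strip().lower())

def model_based_reward(prediction: str, gold: str) -> float:
    """Binary reward from an LLM judge (stubbed out here); recovers recall on paraphrases."""
    raise NotImplementedError("query a judge model for semantic equivalence")

def hybrid_reward(prediction: str, gold: str) -> float:
    try:
        judge = model_based_reward(prediction, gold)
    except NotImplementedError:
        judge = 0.0
    return max(rule_based_reward(prediction, gold), judge)

print(hybrid_reward("42", " 42 "))  # 1.0 from the rule-based check alone
```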
Model Release
We release QwenLong-L1-32B, which is the first long-context LRM trained with reinforcement learning for long-context reasoning. Experiments on seven long-context DocQA benchmarks demonstrate that QwenLong-L1-32B outperforms flagship LRMs like OpenAI-o3-mini and Qwen3-235B-A22B, achieving performance on par with Claude-3.7-Sonnet-Thinking, demonstrating leading performance among state-of-the-art LRMs.
Here are the evaluation results.
Processing Long Documents
For input where the total length (including both input and output) significantly exceeds 32,768 tokens, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the YaRN method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, and `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
Modifying the model files
In the `config.json` file, add the `rope_scaling` fields:
```json
{
  ...,
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
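If you prefer not to edit `config.json` on disk, the same override can be applied in code with `transformers`. This is a sketch; the override simply mirrors the JSON fields shown above, and as with the file edit it is best applied only when you actually need very long inputs, since a static scaling factor can degrade performance on short texts.
```python
# Apply the YaRN rope_scaling override in code instead of editing config.json.
# The fields mirror the JSON shown above.
from transformers import AutoConfig, AutoModelForCausalLM

model_name = "Tongyi-Zhiwen/QwenLong-L1-32B"
config = AutoConfig.from_pretrained(model_name)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```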
Passing command line arguments
For `vllm`, you can use
```bash
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```bash
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```bash
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
Important Note
If you encounter the following warning:
```
Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
```
please upgrade `transformers`.
License
This project is licensed under the Apache-2.0 license.