🚀 Qwen2.5-7B-Instruct-1M GGUF Models
This project provides Qwen2.5-7B-Instruct-1M GGUF models quantized with IQ-DynamicGate, an ultra-low-bit (1-2 bit) precision-adaptive method. The approach improves both memory efficiency and accuracy, making the models suitable for a wide range of hardware environments and application scenarios.
⨠Features
- Ultra-Low-Bit Quantization: Introduces precision-adaptive quantization for 1-2 bit models, improving accuracy and efficiency.
- Multiple Model Formats: Provides various model formats (BF16, F16, Quantized Models, etc.) to meet different hardware and memory requirements.
- Advanced Inference Framework: Develops an advanced inference framework based on vLLM, enhancing long-sequence processing performance.
- AI Network Monitoring Testing: Offers an AI-powered network monitor assistant for testing small open-source models in network monitoring scenarios.
📦 Installation
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and using the latest release is recommended. With `transformers<4.37.0` you will encounter the error `KeyError: 'qwen2'`; please update to the latest version.
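A quick sanity check you can run before loading the model (a minimal sketch; the only assumption is that `transformers` is already installed):

```python
from packaging import version
import transformers

# Qwen2.5 requires the qwen2 architecture, which ships with transformers >= 4.37.0.
print(transformers.__version__)
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError("transformers is too old; run: pip install -U transformers")
```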
💻 Usage Examples
Basic Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct-1M"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
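If you prefer to stream tokens to the console as they are generated instead of waiting for the full completion, `transformers` provides a `TextStreamer` utility. A minimal sketch that reuses `model`, `tokenizer`, and `model_inputs` from the example above:

```python
from transformers import TextStreamer

# Print decoded tokens as they are produced; skip echoing the prompt itself.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**model_inputs, max_new_tokens=512, streamer=streamer)
```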
Advanced Usage - Processing Ultra Long Texts
1. System Preparation
- GPU: Recommended to use GPUs with Ampere or Hopper architecture.
- CUDA Version: 12.1 or 12.3
- Python Version: >=3.9 and <=3.12
- VRAM Requirements (a rough back-of-envelope estimate follows this list):
  - Qwen2.5-7B-Instruct-1M: At least 120GB VRAM (total across GPUs) for 1-million-token sequences.
  - Qwen2.5-14B-Instruct-1M: At least 320GB VRAM (total across GPUs) for 1-million-token sequences.
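To see roughly where the 7B figure comes from, here is an illustrative estimate of the bf16 KV-cache footprint at 1M tokens. It assumes the architecture values listed under Technical Details (28 layers, 4 KV heads) plus a head dimension of 128, which is not stated in this card; the remaining headroom in the 120GB recommendation goes to weights, activations, chunked-prefill buffers, and runtime overhead.

```python
# Rough KV-cache estimate for Qwen2.5-7B-Instruct-1M at 1M tokens (bf16 = 2 bytes per value).
# Assumes 28 layers, 4 KV heads, head_dim 128; both K and V are cached.
num_layers, num_kv_heads, head_dim, bytes_per_value = 28, 4, 128, 2
seq_len = 1_000_000

kv_bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_value  # K + V
kv_cache_gib = kv_bytes_per_token * seq_len / 1024**3
weights_gib = 7.61e9 * 2 / 1024**3  # 7.61B parameters in bf16

print(f"KV cache: ~{kv_cache_gib:.0f} GiB, weights: ~{weights_gib:.0f} GiB")
# ~53 GiB of KV cache plus ~14 GiB of weights; the rest of the recommended
# 120GB covers activations and framework overhead.
```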
2. Install Dependencies
```bash
git clone -b dev/dual-chunk-attn git@github.com:QwenLM/vllm.git
cd vllm
pip install -e . -v
```
3. Launch vLLM
Offline Inference Example
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# Initialize the tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct-1M")

# Pass the default decoding hyperparameters of Qwen2.5-7B-Instruct.
# max_tokens sets the maximum generation length.
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=512)

# Input the model name or path; adjust tensor_parallel_size to the number of available GPUs.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct-1M",
    tensor_parallel_size=4,
    max_model_len=1010000,
    enable_chunked_prefill=True,
    max_num_batched_tokens=131072,
    enforce_eager=True,
    # quantization="fp8",  # Enabling FP8 quantization for model weights can reduce memory usage.
)

# Prepare your prompts
prompt = "Tell me something about large language models."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Generate outputs
outputs = llm.generate([text], sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
OpenAI-Compatible Server Example
```bash
vllm serve Qwen/Qwen2.5-7B-Instruct-1M \
  --tensor-parallel-size 4 \
  --max-model-len 1010000 \
  --enable-chunked-prefill --max-num-batched-tokens 131072 \
  --enforce-eager \
  --max-num-seqs 1
# --quantization fp8  # Enabling FP8 quantization for model weights can reduce memory usage.
```
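Once the server is running, you can query it with any OpenAI-compatible client. A minimal sketch using the official `openai` Python package, assuming the server is listening on the default port 8000:

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint; the API key is not checked by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct-1M",
    messages=[
        {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
        {"role": "user", "content": "Tell me something about large language models."},
    ],
    max_tokens=512,
)
print(completion.choices[0].message.content)
```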
📚 Documentation
Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)
Our latest quantization method uses precision-adaptive quantization for ultra-low-bit models (1-2 bit), benchmarked on Llama-3-8B (see the comparison table below). It applies layer-specific strategies to balance accuracy and memory efficiency.
Benchmark Context
All tests were conducted on Llama-3-8B-Instruct with a standard perplexity evaluation pipeline, a 2048-token context window, and the same prompt set for all quantizations.
Method
- Dynamic Precision Allocation (see the sketch after this list):
  - First/last 25% of layers → IQ4_XS (selected layers)
  - Middle 50% → IQ2_XXS/IQ3_S (increases efficiency)
- Critical Component Protection:
  - Embeddings/output layers use Q5_K
  - Reduces error propagation by 38% compared to standard 1-2 bit quantization
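The allocation rule is simple to express in code. The sketch below is purely illustrative, not the actual quantization implementation; it only mirrors the layer percentages described above with a hypothetical helper function:

```python
def assign_quant_type(layer_idx: int, num_layers: int) -> str:
    """Illustrative layer-wise precision allocation mirroring the scheme above."""
    quarter = num_layers // 4
    if layer_idx < quarter or layer_idx >= num_layers - quarter:
        return "IQ4_XS"   # first/last 25% of layers keep higher precision
    return "IQ2_XXS"      # middle 50% uses the most aggressive quantization

# Embeddings and the output head are protected separately (Q5_K in this scheme).
print([assign_quant_type(i, 28) for i in range(28)])
```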
Quantization Performance Comparison (Llama-3-8B)
| Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
|---|---|---|---|---|---|---|---|---|
| IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
| IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
| IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
| IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
| IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |
Key Improvements:
- 🔥 IQ1_M shows a massive 43.9% perplexity reduction (27.46 → 15.41)
- 🚀 IQ2_S cuts perplexity by 36.9% while adding only 0.2GB
- ⚡ IQ1_S still delivers a 39.7% perplexity reduction despite 1-bit quantization
Tradeoffs:
- All variants have modest size increases (0.1-0.3GB)
- Inference speeds remain comparable (<5% difference)
When to Use These Models
- Fitting models into GPU VRAM
- Memory-constrained deployments
- CPU and edge devices where the accuracy loss of 1-2 bit quantization is tolerable
- Research into ultra-low-bit quantization
Choosing the Right Model Format
Selecting the correct model format depends on your hardware capabilities and memory constraints.
| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|---|---|---|---|---|
| BF16 | Highest | High | BF16-supported GPU/CPU | High-speed inference with reduced memory |
| F16 | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| Q4_K | Medium-Low | Low | CPU or low-VRAM devices | Best for memory-constrained environments |
| Q6_K | Medium | Moderate | CPU with more memory | Better accuracy while still quantized |
| Q8_0 | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| IQ3_XS | Very Low | Very Low | Ultra-low-memory devices | Maximum memory efficiency at the cost of accuracy |
| Q4_0 | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
Included Files & Details
- `Qwen2.5-7B-Instruct-1M-bf16.gguf`: Model weights in BF16. Use for requantization or if your device supports BF16 acceleration.
- `Qwen2.5-7B-Instruct-1M-f16.gguf`: Model weights in F16. Use if your device supports FP16, especially if BF16 is not available.
- `Qwen2.5-7B-Instruct-1M-bf16-q8_0.gguf`: Output & embeddings in BF16, other layers quantized to Q8_0. Use if your device supports BF16 and you want a quantized version.
- `Qwen2.5-7B-Instruct-1M-f16-q8_0.gguf`: Output & embeddings in F16, other layers quantized to Q8_0.
- `Qwen2.5-7B-Instruct-1M-q4_k.gguf`: Output & embeddings quantized to Q8_0, other layers quantized to Q4_K. Good for CPU inference with limited memory.
- `Qwen2.5-7B-Instruct-1M-q4_k_s.gguf`: Smallest Q4_K variant, using less memory at the cost of accuracy. Best for very low-memory setups.
- `Qwen2.5-7B-Instruct-1M-q6_k.gguf`: Output & embeddings quantized to Q8_0, other layers quantized to Q6_K.
- `Qwen2.5-7B-Instruct-1M-q8_0.gguf`: Fully Q8_0-quantized model for better accuracy. Requires more memory but offers higher precision.
- `Qwen2.5-7B-Instruct-1M-iq3_xs.gguf`: IQ3_XS quantization, optimized for extreme memory efficiency. Best for ultra-low-memory devices.
- `Qwen2.5-7B-Instruct-1M-iq3_m.gguf`: IQ3_M quantization, offering a medium block size for better accuracy. Suitable for low-memory devices.
- `Qwen2.5-7B-Instruct-1M-q4_0.gguf`: Pure Q4_0 quantization, optimized for ARM devices. Best for low-memory environments. Prefer IQ4_NL for better accuracy.
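As a quick way to try one of these files locally, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python`. The repository id below is a placeholder, not the actual hosting repo; substitute the repository that hosts these GGUF files, and raise `n_ctx` only as far as your memory allows.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder repo id: replace with the repository that actually hosts these GGUF files.
model_path = hf_hub_download(
    repo_id="your-namespace/Qwen2.5-7B-Instruct-1M-GGUF",
    filename="Qwen2.5-7B-Instruct-1M-q4_k.gguf",
)

# Modest context window for a quick test; increase n_ctx as your RAM allows.
llm = Llama(model_path=model_path, n_ctx=8192)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```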
Testing the AI-Powered Network Monitor Assistant
If you find these models useful, please click "Like"! Help test the AI-Powered Network Monitor Assistant with quantum-ready security checks at Free Network Monitor.
How to test
- Click the chat icon (bottom right on any page)
- Choose an AI assistant type:
  - `TurboLLM` (GPT-4-mini)
  - `FreeLLM` (open-source)
  - `TestLLM` (experimental, CPU-only)
What I'm Testing
Pushing the limits of small open-source models for AI network monitoring, specifically:
- Function calling against live network services
- How small can a model go while still handling:
- Automated Nmap scans
- Quantum-readiness checks
- Metasploit integration
Other Assistants
- 🟢 TurboLLM: Uses gpt-4-mini for real-time network diagnostics and automated penetration testing (Nmap/Metasploit). Get more tokens by downloading our Free Network Monitor Agent.
- 🔵 HugLLM: Open-source models (≈8B params), 2x more tokens than TurboLLM, AI-powered log analysis, runs on the Hugging Face Inference API.
Example AI Commands to Test
"Give me info on my websites SSL certificate"
"Check if my server is using quantum safe encyption for communication"
"Run a quick Nmap vulnerability test"
🔧 Technical Details
The Qwen2.5-1M model is the long-context version of the Qwen2.5 series, supporting a context length of up to 1M tokens. It has the following technical details:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV (these values can be checked programmatically; see the snippet after this list)
- Context Length: full 1,010,000 tokens; generation up to 8,192 tokens
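A quick, optional way to confirm the architecture values above directly from the published checkpoint configuration, assuming access to the Hugging Face Hub:

```python
from transformers import AutoConfig

# Load only the configuration (no weights) and print the architecture values listed above.
cfg = AutoConfig.from_pretrained("Qwen/Qwen2.5-7B-Instruct-1M")
print(cfg.num_hidden_layers)       # number of layers
print(cfg.num_attention_heads)     # query heads
print(cfg.num_key_value_heads)     # KV heads (GQA)
print(cfg.max_position_embeddings) # supported context length
```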
To enhance long-sequence processing, an inference framework based on vLLM has been developed that incorporates sparse attention and length extrapolation (see the vLLM setup above).
📄 License
This project is licensed under the Apache-2.0 License.
Citation
If you find our work helpful, please cite us:
```bibtex
@misc{qwen2.5-1m,
    title  = {Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens},
    url    = {https://qwenlm.github.io/blog/qwen2.5-1m/},
    author = {Qwen Team},
    month  = {January},
    year   = {2025}
}

@article{qwen2.5,
    title   = {Qwen2.5-1M Technical Report},
    author  = {An Yang and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoyan Huang and Jiandong Jiang and Jianhong Tu and Jianwei Zhang and Jingren Zhou and Junyang Lin and Kai Dang and Kexin Yang and Le Yu and Mei Li and Minmin Sun and Qin Zhu and Rui Men and Tao He and Weijia Xu and Wenbiao Yin and Wenyuan Yu and Xiafei Qiu and Xingzhang Ren and Xinlong Yang and Yong Li and Zhiying Xu and Zipeng Zhang},
    journal = {arXiv preprint arXiv:2501.15383},
    year    = {2025}
}
```

