
Qwen2 72B Instruct 2.0bpw H Novel Exl2

Developed by Orion-zhen
The next-generation 72B-parameter large language model from Tongyi Qianwen, supporting a 131K-token context for long-text processing and delivering strong performance in language understanding, text generation, programming, and mathematical reasoning
Downloads: 21
Release Time: 6/12/2024

Model Overview

The instruction-tuned 72B-parameter member of the Qwen2 series, built on the Transformer architecture and supporting ultra-long text processing and multilingual interaction

Model Features

Extended Context Support
Context window extended to 131,072 tokens via YaRN, capable of handling long documents and complex dialogues (a configuration sketch follows this list)
Exceptional Multi-Domain Performance
Outperforms comparable open-source models on academic benchmarks such as MMLU and GPQA, as well as on programming and mathematical reasoning tasks
Quantization Adaptation
Offers a 2.0-bpw ExLlamaV2 (Exl2) quantization that runs on consumer-grade GPUs with 24GB of VRAM, tuned for novel generation
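
How the 131,072-token window is usually enabled: the upstream Qwen2-72B-Instruct card describes adding a YaRN rope_scaling entry to config.json. The sketch below is a minimal Python illustration of that step, assuming the same convention and a hypothetical local directory name; the exact keys and factor may differ for this Exl2 quantization.

```python
import json
from pathlib import Path

# Hypothetical local directory holding the downloaded weights.
model_dir = Path("Qwen2-72B-Instruct-2.0bpw-h-novel-exl2")
config_path = model_dir / "config.json"

config = json.loads(config_path.read_text())

# YaRN scaling as documented for upstream Qwen2-72B-Instruct;
# treat the exact values as an assumption for this quantized variant.
config["rope_scaling"] = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

config_path.write_text(json.dumps(config, indent=2))
```

With a 32,768-token native window and a scaling factor of 4.0, this yields the advertised 131,072-token context; inference backends pick the field up when the model is loaded.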

Model Capabilities

Long text comprehension and generation
Multi-turn dialogue
Code generation and explanation
Mathematical problem solving
Multilingual translation
Knowledge Q&A

Use Cases

Content Creation
Novel Generation
Generates coherent long-form narratives with the 2.0-bpw quantization (see the loading sketch after these use cases)
Optimized on a pixiv novel dataset, which reduces perplexity on that domain
Intelligent Assistant
Knowledge Q&A System
Can serve as the interaction front end of an enterprise knowledge base
Scores 83.8 on the C-Eval Chinese-language benchmark
Education
Programming Tutoring
Real-time code explanation and error correction
Scores 86.0 on the HumanEval code-generation benchmark
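
Below is a minimal sketch of running the 2.0-bpw Exl2 weights for novel generation on a single 24GB GPU with the ExLlamaV2 runtime. Class and method names follow the commonly documented exllamav2 Python API; treat them, the directory name, the reduced sequence length, and the sampler settings as assumptions that may vary across library versions, not an official recipe for this model.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Hypothetical local path to the 2.0bpw Exl2 weights.
config = ExLlamaV2Config()
config.model_dir = "Qwen2-72B-Instruct-2.0bpw-h-novel-exl2"
config.prepare()
config.max_seq_len = 8192  # assumed cap so the KV cache also fits in 24GB VRAM

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # stream weights onto the available GPU(s)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # illustrative sampling values, not tuned
settings.top_p = 0.9

prompt = "Write the opening scene of a slow-burn mystery novel set in a coastal town."
print(generator.generate_simple(prompt, settings, 512))
```

The lazy cache together with load_autosplit keeps peak memory within a single consumer GPU; raising max_seq_len for longer chapters trades VRAM headroom for context.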