# WizardLM-2-8x22B
WizardLM-2-8x22B is a state-of-the-art large language model. It shows excellent performance on complex chat, multilingual, reasoning, and agent tasks, outperforming many existing open-source models.
## Quick Start
For the WizardLM-2-7B re-upload, see here.
## ✨ Features
### News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning, and agent tasks. The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model. It demonstrates highly competitive performance compared to leading proprietary models and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice among models of the same size.
- WizardLM-2 7B is the fastest and achieves performance comparable to leading open-source models 10x its size.
For more details on WizardLM-2, please read our release blog post and upcoming paper.
## Documentation
### Model Details

### Model Capacities

#### MT-Bench
We also adopt the automatic MT-Bench evaluation framework based on GPT-4, proposed by lmsys, to assess model performance. WizardLM-2 8x22B demonstrates highly competitive performance even when compared to the most advanced proprietary models, while WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B and 70B scales.
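MT-Bench scores model answers with a strong LLM acting as the judge. As a rough illustration of that idea only (this is not the actual MT-Bench harness, which lives in lmsys's FastChat repository and uses calibrated multi-turn judge prompts), a single-answer grading call might look like the following sketch; the judge prompt wording is our own placeholder.

```python
# Rough illustration of LLM-as-judge scoring in the spirit of MT-Bench.
# NOT the real MT-Bench harness (see lmsys's FastChat repo for that);
# the judge prompt below is a simplified placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_answer(question: str, answer: str) -> str:
    prompt = (
        "Please act as an impartial judge and rate the quality of the AI "
        "assistant's answer to the user question below on a scale of 1 to 10.\n\n"
        f"[Question]\n{question}\n\n[Answer]\n{answer}\n\n"
        "Reply with the numeric rating only."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

print(judge_answer("What is the capital of France?", "The capital of France is Paris."))
```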
#### Human Preferences Evaluation
We carefully collected a complex and challenging evaluation set of real-world instructions covering the main categories of human tasks, such as writing, coding, math, reasoning, agent, and multilingual use. We report the win:loss rate with ties excluded (a short sketch of this metric follows the list below):
- WizardLM-2 8x22B falls only slightly behind GPT-4-1106-preview, and is significantly stronger than Command R Plus and GPT-4-0314.
- WizardLM-2 70B is better than GPT-4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
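For clarity, here is a tiny illustration of how a win:loss rate "without tie" can be computed from pairwise preference judgements; the counts are made-up placeholders, not the data behind the results above.

```python
# Illustrative only: win:loss rate with ties excluded.
# The judgements list is a hypothetical example, not real evaluation data.
judgements = ["win", "loss", "tie", "win", "win", "tie", "loss"]

wins = judgements.count("win")
losses = judgements.count("loss")  # ties are dropped from the ratio
print(f"win:loss = {wins}:{losses} "
      f"({100 * wins / (wins + losses):.1f}% wins among decisive comparisons)")
```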
### Method Overview
We built a fully AI-powered synthetic training system to train the WizardLM-2 models. Please refer to our blog for more details on this system.
### Usage
#### ⚠️ Important Note on System Prompt Usage

WizardLM-2 adopts the prompt format from Vicuna and supports multi-turn conversation. The prompt should be as follows:
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
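To make the format concrete, here is a minimal sketch of assembling this Vicuna-style prompt in Python; the `build_prompt` helper is purely illustrative and not part of any official WizardLM-2 API.

```python
# Minimal sketch of assembling the Vicuna-style prompt shown above.
# build_prompt is an illustrative helper, not an official WizardLM-2 API.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """turns: list of (user_message, assistant_reply) pairs; pass None as the
    final reply to leave the prompt open for generation."""
    prompt = SYSTEM
    for user_msg, assistant_msg in turns:
        prompt += f" USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}</s>"
    return prompt

print(build_prompt([("Hi", "Hello."), ("Who are you?", None)]))
# -> "... USER: Hi ASSISTANT: Hello.</s> USER: Who are you? ASSISTANT:"
```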
#### Inference WizardLM-2 Demo Script
We provide WizardLM-2 inference demo code on our GitHub.
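As a rough illustration only (refer to the official GitHub demo for the supported setup), inference with a vLLM-style engine might look like the sketch below; the local model path is a placeholder, and the sampling parameters are arbitrary rather than recommended values.

```python
# Hedged sketch of single-prompt inference with vLLM; see the official demo
# script on GitHub for the supported version. The model path is a placeholder
# for wherever the WizardLM-2-8x22B weights live; sampling values are arbitrary.
from vllm import LLM, SamplingParams

llm = LLM(model="/path/to/WizardLM-2-8x22B")  # placeholder path
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=512, stop=["</s>"])

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Who are you? ASSISTANT:"
)
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```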
Detailed results can be found here.

| Metric | Value |
|--------|-------|
| Avg. | 32.61 |
| IFEval (0-Shot) | 52.72 |
| BBH (3-Shot) | 48.58 |
| MATH Lvl 5 (4-Shot) | 22.28 |
| GPQA (0-shot) | 17.56 |
| MuSR (0-shot) | 14.54 |
| MMLU-PRO (5-shot) | 39.96 |
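Benchmarks such as IFEval, BBH, GPQA, MuSR, and MMLU-PRO are commonly run with EleutherAI's lm-evaluation-harness. A hedged sketch of reproducing such numbers through its Python API follows; the task-group name and model path are assumptions and vary across harness versions.

```python
# Hedged sketch: reproducing leaderboard-style numbers with EleutherAI's
# lm-evaluation-harness. The task-group name and model path below are
# assumptions; check your installed harness version for exact task names.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=/path/to/WizardLM-2-8x22B",  # placeholder path
    tasks=["leaderboard"],  # Open LLM Leaderboard task group, if available
    batch_size=1,
)
print(results["results"])
```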
🏠 WizardLM-2 Release Blog
🤗 HF Repo • 🐱 Github Repo • 🐦 Twitter • 📃 [WizardLM] • 📃 [WizardCoder] • 📃 [WizardMath]
👋 Join our Discord
## License
The model is licensed under Apache 2.0.