🚀 DeepSeek-R1
DeepSeek-R1 is a first-generation reasoning model that achieves performance comparable to OpenAI-o1 on math, code, and reasoning tasks. Unsloth Dynamic v2.0 delivers excellent quantization: its 1.58-bit + 2-bit dynamic quantization of DeepSeek-R1 quantizes selectively, greatly improving accuracy over standard quantization. This project also provides instructions for running the model, evaluation results, and usage recommendations to support related research.
🚀 Quick Start
Running the Model
Instructions for running this model in `llama.cpp` are given below; you can also find a more detailed walkthrough at unsloth.ai/blog/deepseekr1-dynamic:
- Do not forget the `<|User|>` and `<|Assistant|>` tokens! Alternatively, use a chat template formatter; a minimal sketch follows the build commands below.
- Get the latest `llama.cpp` from https://github.com/ggerganov/llama.cpp. You can also follow the build instructions below:
```bash
apt-get update
apt-get install build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```
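If you prefer a chat template formatter over hand-writing the `<|User|>`/`<|Assistant|>` markers, here is a minimal sketch using `transformers` (this assumes the `deepseek-ai/DeepSeek-R1` repo ships a chat template in its tokenizer config, and that `transformers` is installed):

```python
from transformers import AutoTokenizer

# Download only the tokenizer; it carries the model's chat template.
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Create a Flappy Bird game in Python."}],
    tokenize=False,
    add_generation_prompt=True,  # appends the assistant marker for generation
)
print(prompt)  # should wrap the message in <|User|>...<|Assistant|>
```

The resulting string can be passed directly to `llama-cli` via `--prompt`.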
- It is best to use `--min-p 0.05` to counteract very rare token predictions; this setting works especially well for the 1.58-bit model.
- Download the model via:
```python
# pip install huggingface_hub hf_transfer
# import os  # Optional for faster downloading
# os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id="unsloth/DeepSeek-R1-GGUF",
    local_dir="DeepSeek-R1-GGUF",
    allow_patterns=["*UD-IQ1_S*"],  # Select quant type UD-IQ1_S for 1.58bit
)
```
- Example run with Q8_0 quantization of the K cache (note that `-no-cnv` disables the automatic conversational mode):
```bash
./llama.cpp/llama-cli \
    --model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --cache-type-k q8_0 \
    --threads 12 -no-cnv --prio 2 \
    --temp 0.6 \
    --ctx-size 8192 \
    --seed 3407 \
    --prompt "<|User|>Create a Flappy Bird game in Python.<|Assistant|>"
```
Example output:
```
<think>
Okay, so I need to figure out what 1 plus 1 is. Hmm, where do I even start? I remember from school that adding numbers is pretty basic, but I want to make sure I understand it properly.
Let me think, 1 plus 1. So, I have one item and I add another one. Maybe like a apple plus another apple. If I have one apple and someone gives me another, I now have two apples. So, 1 plus 1 should be 2. That makes sense.
Wait, but sometimes math can be tricky. Could it be something else? Like, in a different number system maybe? But I think the question is straightforward, using regular numbers, not like binary or hexadecimal or anything.
I also recall that in arithmetic, addition is combining quantities. So, if you have two quantities of 1, combining them gives you a total of 2. Yeah, that seems right.
Is there a scenario where 1 plus 1 wouldn't be 2? I can't think of any...
```
- If you have a 24GB GPU (e.g. an RTX 4090), you can offload several layers to the GPU for faster processing; with multiple GPUs, you can offload more layers. A rough sizing sketch follows the command below.
```bash
./llama.cpp/llama-cli \
    --model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --cache-type-k q8_0 \
    --threads 12 -no-cnv --prio 2 \
    --n-gpu-layers 7 \
    --temp 0.6 \
    --ctx-size 8192 \
    --seed 3407 \
    --prompt "<|User|>Create a Flappy Bird game in Python.<|Assistant|>"
```
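How many layers to offload depends on your VRAM. As a rough, hypothetical heuristic (the layer count and sizes below are illustrative assumptions, not measured values), divide spare VRAM by an estimated per-layer size:

```python
def estimate_gpu_layers(vram_gb: float, model_size_gb: float,
                        n_layers: int = 61, reserve_gb: float = 6.0) -> int:
    """Rough guess at --n-gpu-layers: spare VRAM / average layer size.

    Assumes DeepSeek-R1's 61 transformer layers and reserves headroom
    for the KV cache and CUDA buffers.
    """
    per_layer_gb = model_size_gb / n_layers
    usable_gb = max(vram_gb - reserve_gb, 0.0)
    return min(n_layers, int(usable_gb / per_layer_gb))

# The 1.58-bit UD-IQ1_S quant is roughly 131GB on disk, so a 24GB card
# lands in the single digits, consistent with the 7 layers used above.
print(estimate_gpu_layers(vram_gb=24, model_size_gb=131))  # -> 8
```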
- If you want to merge the split weights into a single file, you can use the following script:
```bash
./llama.cpp/llama-gguf-split --merge \
    DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    merged_file.gguf
```
Fine-tuning Models
We have a free Google Colab notebook that turns Llama 3.1 (8B) into a reasoning model: click to view.
All notebooks are beginner-friendly! Add your dataset, click "Run All", and you will get a 2x-faster fine-tuned model that can be exported to GGUF or vLLM, or uploaded to Hugging Face. A minimal sketch of the underlying flow follows the notes below.
Unsloth-supported model | Free notebook link | Speedup | Memory reduction |
---|---|---|---|
GRPO with Phi-4 (14B) | ▶️ Start on Colab | 2x faster | 80% less |
Llama-3.2 (3B) | ▶️ Start on Colab | 2.4x faster | 58% less |
Llama-3.2 (11B vision) | ▶️ Start on Colab | 2x faster | 60% less |
Qwen2 VL (7B) | ▶️ Start on Colab | 1.8x faster | 60% less |
Qwen2.5 (7B) | ▶️ Start on Colab | 2x faster | 60% less |
Llama-3.1 (8B) | ▶️ Start on Colab | 2.4x faster | 58% less |
Phi-3.5 (mini) | ▶️ Start on Colab | 2x faster | 50% less |
Gemma 2 (9B) | ▶️ Start on Colab | 2.4x faster | 58% less |
Mistral (7B) | ▶️ Start on Colab | 2.2x faster | 62% less |
Notes
- The Llama 3.2 conversational notebook works with the ShareGPT ChatML / Vicuna templates.
- The text-completion notebook is for raw text. The DPO notebook replicates Zephyr.
- * Kaggle has 2x T4 GPUs, but we use 1; due to overhead, 1x T4 is 5x faster.
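For reference, here is a minimal, hypothetical sketch of the flow those notebooks automate (the model name and hyperparameters are illustrative placeholders, not the notebooks' exact settings):

```python
from unsloth import FastLanguageModel

# Load a 4-bit base model (QLoRA-style) with Unsloth's patched kernels.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these small matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# ...train on your dataset (e.g. with trl's SFTTrainer), then export:
# model.save_pretrained_gguf("output", tokenizer)  # GGUF for llama.cpp
```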
✨ Key Features
Unsloth Dynamic v2.0 Advantages
- High accuracy: Unsloth Dynamic v2.0 achieves superior accuracy, outperforming other leading quantization methods.
- Selective quantization: Unsloth's 1.58-bit + 2-bit dynamic quantization of DeepSeek-R1 quantizes selectively, greatly improving accuracy over standard 1-bit/2-bit quantization.
DeepSeek-R1 Model Features
- Strong reasoning: DeepSeek-R1 achieves performance comparable to OpenAI-o1 on math, code, and reasoning tasks.
- Open source: DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen are open-sourced. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, setting new state-of-the-art results for dense models.
Model Training and Optimization
- Reinforcement learning: Reinforcement learning (RL) is applied directly to the base model, without supervised fine-tuning (SFT) as a preliminary step, producing DeepSeek-R1-Zero, which demonstrates capabilities such as self-verification, reflection, and generating long chains of thought (CoT).
- Multi-stage training: The pipeline for developing DeepSeek-R1 incorporates two RL stages and two SFT stages, aimed at discovering better reasoning patterns and aligning with human preferences.
📦 Installation
Environment Setup
Please follow the steps in the "Quick Start" section above to install `llama.cpp`, download the model, and so on.
Runtime Settings
When running DeepSeek-R1 series models, we recommend the following configuration to obtain the expected performance (a prompt-construction sketch follows this list):
- Set the temperature within the range 0.5-0.7 (0.6 is recommended) to prevent endless repetition or incoherent output.
- Avoid adding a system prompt; all instructions should be contained in the user prompt.
- For math problems, include a directive in the prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
- When evaluating model performance, run multiple tests and average the results.
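A minimal sketch applying these recommendations when building a raw prompt (the question is a placeholder; the temperature is passed to whatever runtime you use, e.g. `--temp 0.6` for `llama-cli`):

```python
# No system prompt: everything, including the math directive, goes in the
# user turn, wrapped in the model's <|User|>/<|Assistant|> markers.
question = "What is 1 + 1?"
user_turn = (
    f"{question}\n"
    "Please reason step by step, and put your final answer within \\boxed{}."
)
prompt = f"<|User|>{user_turn}<|Assistant|>"
print(prompt)
```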
📚 Detailed Documentation
Model Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, performs remarkably on reasoning but suffers from endless repetition, poor readability, and language mixing. To address these issues and further improve reasoning performance, we introduce DeepSeek-R1, which incorporates cold-start data before RL.
Model Training
- Reinforcement learning: RL is applied directly to the base model to produce DeepSeek-R1-Zero, which demonstrates capabilities such as self-verification, reflection, and generating long CoT; this is the first open research to show that LLM reasoning capabilities can be incentivized purely through RL.
- Multi-stage training: The pipeline for developing DeepSeek-R1 incorporates two RL stages and two SFT stages, aimed at discovering better reasoning patterns and aligning with human preferences.
Model Downloads
DeepSeek-R1 Models
Model | # Total Params | # Activated Params | Context Length | Download |
---|---|---|---|---|
DeepSeek-R1-Zero | 671B | 37B | 128K | 🤗 HuggingFace |
DeepSeek-R1 | 671B | 37B | 128K | 🤗 HuggingFace |
DeepSeek-R1-Distill Models
Model | Base Model | Download |
---|---|---|
DeepSeek-R1-Distill-Qwen-1.5B | Qwen2.5-Math-1.5B | 🤗 HuggingFace |
DeepSeek-R1-Distill-Qwen-7B | Qwen2.5-Math-7B | 🤗 HuggingFace |
DeepSeek-R1-Distill-Llama-8B | Llama-3.1-8B | 🤗 HuggingFace |
DeepSeek-R1-Distill-Qwen-14B | Qwen2.5-14B | 🤗 HuggingFace |
DeepSeek-R1-Distill-Qwen-32B | Qwen2.5-32B | 🤗 HuggingFace |
DeepSeek-R1-Distill-Llama-70B | Llama-3.3-70B-Instruct | 🤗 HuggingFace |
Evaluation Results
DeepSeek-R1 Evaluation
For all models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of 0.6 and a top-p of 0.95, and generate 64 responses per query to estimate pass@1. A minimal sketch of this estimate follows the table below.
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|---|---|---|---|---|---|---|---|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | 91.8 | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | 92.9 |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | 84.0 |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | 92.2 |
| | IF-Eval (Prompt Strict) | 86.5 | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | 75.7 | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | 47.0 | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | 82.5 |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | 87.6 |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | 92.3 |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | 65.9 |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | 96.6 | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | 2061 | 2029 |
| | SWE Verified (Resolved) | 50.8 | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | 61.7 | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | 79.8 |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | 97.3 |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | 78.8 |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | 92.8 |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | 91.8 |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | 68.0 | 40.3 | - | 63.7 |
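For concreteness, here is a minimal sketch of the sampling-based metrics above: pass@1 estimated as mean correctness over the 64 samples, and cons@64 (used in the distilled-model table below) as majority-vote accuracy. The grading predicate is a placeholder assumption:

```python
from collections import Counter
from typing import Callable

def pass_at_1(answers: list[str], is_correct: Callable[[str], bool]) -> float:
    """Fraction of the k sampled answers that are correct (mean over samples)."""
    return sum(is_correct(a) for a in answers) / len(answers)

def cons_at_k(answers: list[str], is_correct: Callable[[str], bool]) -> bool:
    """Consensus accuracy: grade only the most common answer."""
    majority, _ = Counter(answers).most_common(1)[0]
    return is_correct(majority)

# 64 final answers sampled at temperature 0.6 / top-p 0.95 (toy data):
answers = ["2"] * 40 + ["3"] * 24
print(pass_at_1(answers, lambda a: a == "2"))  # 0.625
print(cons_at_k(answers, lambda a: a == "2"))  # True
```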
Distilled Model Evaluation
Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
---|---|---|---|---|---|---|
GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | 1820 |
QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
DeepSeek-R1-Distill-Qwen-32B | 72.6 | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
DeepSeek-R1-Distill-Llama-70B | 70.0 | 86.7 | 94.5 | 65.2 | 57.5 | 1633 |
Using the Model
Online Chat and API Platform
- Online chat: You can chat with DeepSeek-R1 on DeepSeek's official website, chat.deepseek.com, with the "DeepThink" button toggled on.
- API platform: We provide an OpenAI-compatible API on the DeepSeek Platform at platform.deepseek.com.
Running Locally
- DeepSeek-R1 models: Please visit the DeepSeek-V3 repository for more information about running DeepSeek-R1 locally.
- DeepSeek-R1-Distill models: These can be used in the same way as Qwen or Llama models. For example, you can easily start a server with vLLM:
```bash
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also start a server with SGLang:
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
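Both servers expose an OpenAI-compatible API. Assuming vLLM's default host and port (localhost:8000; adjust to your deployment), a minimal client sketch:

```python
from openai import OpenAI

# vLLM ignores the API key by default; any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user",
               "content": "What is 17 * 24? Please reason step by step."}],
    temperature=0.6,  # the recommended setting for R1-series models
)
print(resp.choices[0].message.content)
```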
🔧 Technical Details
Model Training Techniques
- Reinforcement learning: RL is applied directly to the base model, without supervised fine-tuning as a preliminary step, allowing the model to explore chain-of-thought (CoT) reasoning for solving complex problems.
- Multi-stage training: The pipeline for developing DeepSeek-R1 incorporates two RL stages and two SFT stages; across these stages the model discovers better reasoning patterns and aligns with human preferences.
Model Quantization
Unsloth's DeepSeek-R1 uses 1.58-bit + 2-bit dynamic quantization, applied selectively, which improves accuracy over standard 1-bit/2-bit quantization.
Evaluation Setup
For all models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of 0.6 and a top-p of 0.95, and generate 64 responses per query to estimate pass@1.
📄 License
This code repository and the model weights are licensed under the MIT License. The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-Qwen-32B are derived from the Qwen-2.5 series, originally licensed under the Apache 2.0 License, and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base, originally under the llama3.1 license.
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct, originally under the llama3.3 license.
Acknowledgements
Many thanks to the DeepSeek team for creating and releasing these models.
Citation
```bibtex
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
Contact
If you have any questions, please raise an issue or contact us at service@deepseek.com.