🚀 DeepSeek-R1
DeepSeek-R1 is a family of reasoning models comprising DeepSeek-R1-Zero and DeepSeek-R1. They excel at math, code, and reasoning tasks, with performance comparable to OpenAI-o1. The project open-sources these models along with distilled variants based on Llama and Qwen to support the research community.
🚀 Quick Start
Running the Model
- See all versions for DeepSeek-R1 releases in GGUF, 4-bit, and original formats.
- Detailed instructions for running this model in llama.cpp are in the blog post: unsloth.ai/blog/deepseek-r1.
Running Locally
- DeepSeek-R1 models: please visit the DeepSeek-V3 repository for more information on running them.
- DeepSeek-R1-Distill models: can be used just like Qwen or Llama models. For example, launch a server with vLLM:
```bash
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also launch a server with SGLang:
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
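Both servers expose an OpenAI-compatible endpoint, so any OpenAI-style client can query them. A minimal sketch, assuming vLLM's default port 8000 (SGLang defaults to 30000):
```python
# Minimal client sketch for the local server started above. Assumes the
# vLLM default port (8000); for SGLang, use http://localhost:30000/v1.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "What is 1+1?"}],
    temperature=0.6,  # recommended range for the R1 series is 0.5-0.7
    top_p=0.95,
)
print(response.choices[0].message.content)
```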
Fine-tune for Free
All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x-faster fine-tuned model that can be exported to GGUF or vLLM formats or uploaded to Hugging Face. (A code sketch of the workflow these notebooks automate appears after the notes below.)
| Unsloth-Supported Model | Free Notebook Link | Speedup | Memory Savings |
|---|---|---|---|
| Llama-3.2 (3B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Llama-3.2 (11B vision) | ▶️ Start on Colab | 2x faster | 60% less |
| Qwen2 VL (7B) | ▶️ Start on Colab | 1.8x faster | 60% less |
| Qwen2.5 (7B) | ▶️ Start on Colab | 2x faster | 60% less |
| Llama-3.1 (8B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Phi-3.5 (mini) | ▶️ Start on Colab | 2x faster | 50% less |
| Gemma 2 (9B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral (7B) | ▶️ Start on Colab | 2.2x faster | 62% less |
Notes
- The Llama 3.2 conversational notebook is for the ShareGPT ChatML / Vicuna template.
- The text-completion notebook is for raw text. The DPO notebook reproduces Zephyr.
- * Kaggle provides 2x T4 GPUs, but we use only 1; due to overhead, running on 1x T4 is 5x faster.
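As a rough sketch of what these notebooks automate, assuming Unsloth's FastLanguageModel API (the model name and LoRA settings below are illustrative, not prescriptive):
```python
# Hedged fine-tuning sketch; repo id and hyperparameters are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/DeepSeek-R1-Distill-Llama-8B",  # assumed repo id
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit loading keeps memory usage low
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
# From here, pass `model`, `tokenizer`, and your dataset to a standard
# trainer (e.g., TRL's SFTTrainer), then export to GGUF/vLLM or push to
# Hugging Face, as the notebooks do.
```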
✨ Key Features
Model Introduction
- DeepSeek-R1-Zero: trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step. It demonstrates remarkable reasoning ability but suffers from endless repetition, poor readability, and language mixing.
- DeepSeek-R1: incorporates cold-start data before RL, which resolves DeepSeek-R1-Zero's issues and further improves reasoning, achieving performance on par with OpenAI-o1 across math, code, and reasoning tasks.
Model Training
- Post-training: RL is applied directly to the base model without SFT as a preliminary step, producing DeepSeek-R1-Zero. The model exhibits self-verification, reflection, and long chain-of-thought (CoT) generation, making this the first open research to validate that LLM reasoning capability can be incentivized purely through RL.
- Development pipeline: the DeepSeek-R1 pipeline introduces two RL stages and two SFT stages, aimed at discovering better reasoning patterns and aligning with human preferences.
Model Distillation
The work demonstrates that the reasoning patterns of larger models can be distilled into smaller ones, and the open-sourced DeepSeek-R1 and its API will help the research community distill better small models in the future. Several dense models widely used in the research community were fine-tuned on reasoning data generated by DeepSeek-R1; evaluations show that these distilled small dense models perform exceptionally well on benchmarks.
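As a rough illustration of this recipe (a hedged sketch, not the authors' actual training code), the distillation step amounts to standard SFT on R1-generated traces; the dataset file and hyperparameters below are placeholders:
```python
# Hedged distillation sketch using TRL's SFTTrainer; the dataset file is a
# placeholder, assumed to hold a "text" field combining each prompt with an
# R1-generated chain-of-thought answer.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="r1_reasoning_traces.jsonl", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-Math-7B",  # the base of DeepSeek-R1-Distill-Qwen-7B
    train_dataset=dataset,
    args=SFTConfig(output_dir="r1-distill-sft"),
)
trainer.train()
```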
📦 Installation
Model Downloads
DeepSeek-R1 Models
| Model | Total Params | Activated Params | Context Length | Download |
|---|---|---|---|---|
| DeepSeek-R1-Zero | 671B | 37B | 128K | 🤗 HuggingFace |
| DeepSeek-R1 | 671B | 37B | 128K | 🤗 HuggingFace |
DeepSeek-R1-Distill Models
| Model | Base Model | Download |
|---|---|---|
| DeepSeek-R1-Distill-Qwen-1.5B | Qwen2.5-Math-1.5B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Qwen-7B | Qwen2.5-Math-7B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Llama-8B | Llama-3.1-8B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Qwen-14B | Qwen2.5-14B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Qwen-32B | Qwen2.5-32B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Llama-70B | Llama-3.3-70B-Instruct | 🤗 HuggingFace |
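All of the above can be fetched with huggingface_hub; for example (any repo id from the tables works the same way):
```python
# Download a distilled checkpoint locally with huggingface_hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    local_dir="DeepSeek-R1-Distill-Qwen-7B",
)
```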
💻 Usage Examples
Basic Usage
An example of running the model in llama.cpp:
```bash
./llama.cpp/llama-cli \
--model unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF/DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf \
--cache-type-k q8_0 \
--threads 16 \
--prompt '<|User|>What is 1+1?<|Assistant|>' \
-no-cnv
```
Advanced Usage
If you have a GPU (e.g., an RTX 4090 with 24GB of VRAM), you can offload multiple layers to it for faster processing:
```bash
./llama.cpp/llama-cli \
--model unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF/DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf \
--cache-type-k q8_0 \
--threads 16 \
--prompt '<|User|>What is 1+1?<|Assistant|>' \
--n-gpu-layers 20 \
-no-cnv
```
📚 Detailed Documentation
Evaluation Results
DeepSeek-R1 Evaluation
The maximum generation length is set to 32,768 tokens for all models. For benchmarks requiring sampling, a temperature of $0.6$ and a top-p value of $0.95$ are used, generating 64 responses per query to estimate pass@1.
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|---|---|---|---|---|---|---|---|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | 91.8 | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | 92.9 |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | 84.0 |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | 92.2 |
| | IF-Eval (Prompt Strict) | 86.5 | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | 75.7 | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | 47.0 | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | 82.5 |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | 87.6 |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | 92.3 |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | 65.9 |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | 96.6 | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | 2061 | 2029 |
| | SWE Verified (Resolved) | 50.8 | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | 61.7 | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | 79.8 |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | 97.3 |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | 78.8 |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | 92.8 |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | 91.8 |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | 68.0 | 40.3 | - | 63.7 |
Distilled Model Evaluation
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|---|---|---|---|---|---|---|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | 1820 |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | 72.6 | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | 86.7 | 94.5 | 65.2 | 57.5 | 1633 |
Chat Website & API Platform
- Chat Website: chat with DeepSeek-R1 on DeepSeek's official website, chat.deepseek.com, with the "DeepThink" toggle switched on.
- API Platform: an OpenAI-compatible API is available on the DeepSeek Platform at platform.deepseek.com.
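A hedged client sketch against that endpoint; the base URL and model name below follow the platform documentation as of this writing, so verify them there before use:
```python
# Hedged example for the OpenAI-compatible DeepSeek Platform API; check
# platform.deepseek.com for the current base URL and model name.
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="<your API key>")

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed platform name for DeepSeek-R1
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)
message = response.choices[0].message
print(message.reasoning_content)  # chain of thought (reasoner-specific field)
print(message.content)            # final answer
```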
Usage Recommendations
When using the DeepSeek-R1 series models (including for benchmarking), the following configuration is recommended to achieve the expected performance; a minimal example applying it follows the list:
- Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetition or incoherent output.
- Avoid adding a system prompt; all instructions should be contained within the user prompt.
- For math problems, include a directive in the prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
- When evaluating model performance, run multiple tests and average the results.
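A minimal sketch applying these recommendations with transformers; the model choice and token budget are illustrative:
```python
# Sketch of the recommended settings; uses the smallest distilled model for
# illustration, with no system prompt and the \boxed{} directive inline.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content":
             "What is 17 * 23? Please reason step by step, and put your "
             "final answer within \\boxed{}."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=2048,   # leave room for the long chain of thought
    do_sample=True,
    temperature=0.6,       # recommended 0.5-0.7
    top_p=0.95,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```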
🔧 Technical Details
Model Architecture
DeepSeek-R1 and DeepSeek-R1-Zero are trained on top of DeepSeek-V3-Base and use a Mixture-of-Experts (MoE) architecture with 671B total parameters, of which 37B are activated per token.
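A toy sketch (not the actual DeepSeek-V3 code) of why only a fraction of the parameters is active per token: a router scores all experts, but only the top-k are evaluated.
```python
# Toy MoE layer: the router picks the top-k experts per token; the other
# experts' weights are never touched, which is why activated parameters
# are far fewer than total parameters.
import numpy as np

def moe_layer(x, experts_w, router_w, k=2):
    """x: (d,) token vector; experts_w: (n, d, d); router_w: (n, d)."""
    scores = router_w @ x                 # one routing score per expert
    top = np.argsort(scores)[-k:]         # indices of the k best experts
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax weights
    return sum(g * (experts_w[i] @ x) for g, i in zip(gates, top))

rng = np.random.default_rng(0)
d, n = 8, 16
y = moe_layer(rng.normal(size=d),
              rng.normal(size=(n, d, d)),
              rng.normal(size=(n, d)))
print(y.shape)  # (8,)
```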
Training Method
RL is applied directly to the base model without SFT, producing DeepSeek-R1-Zero. The DeepSeek-R1 pipeline then introduces two RL stages and two SFT stages, aimed at discovering better reasoning patterns and aligning with human preferences.
Evaluation Setup
The maximum generation length is set to 32,768 tokens for all models. For benchmarks requiring sampling, a temperature of $0.6$ and a top-p value of $0.95$ are used, generating 64 responses per query to estimate pass@1.
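Concretely, pass@1 here is the mean correctness over the 64 sampled responses, and the cons@64 column in the distilled-model table above is majority-vote accuracy; a small sketch:
```python
# Sketch of the two sampling-based statistics used in the tables above.
from collections import Counter

def pass_at_1(correct_flags):
    """Mean correctness over k sampled responses to one query."""
    return sum(correct_flags) / len(correct_flags)

def cons_at_k(answers, reference):
    """Majority vote: 1 if the most frequent answer matches the reference."""
    majority, _ = Counter(answers).most_common(1)[0]
    return int(majority == reference)

print(pass_at_1([1, 0, 1, 1]))               # 0.75 (k=4 for brevity; the report uses 64)
print(cons_at_k(["2", "2", "3", "2"], "2"))  # 1
```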
📄 License
Both this code repository and the model weights are licensed under the MIT License. The DeepSeek-R1 series supports commercial use and permits any modifications and derivative works, including but not limited to distillation for training other LLMs. Please note:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-Qwen-32B are derived from the Qwen-2.5 series, originally licensed under the Apache 2.0 License, and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and was originally licensed under the Llama3.1 license.
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and was originally licensed under the Llama3.3 license.
Citation
```bibtex
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
Contact
If you have any questions, please raise an issue or contact us at service@deepseek.com.



