🚀 DeepSeek-R1
DeepSeek-R1 is a first-generation reasoning model trained via large-scale reinforcement learning. It performs strongly on math, code, and reasoning tasks. This project open-sources the related models to support the research community.
🚀 Quick Start
Running the model
You can find more detailed instructions in our blog: unsloth.ai/blog/deepseek-r1
Instructions for running this model in llama.cpp:
- Do not forget the `<｜User｜>` and `<｜Assistant｜>` tokens, or use a chat template formatter.
- Obtain the latest `llama.cpp` from https://github.com/ggerganov/llama.cpp.
- Example with a Q8_0 K-quantized cache; note that `-no-cnv` disables automatic conversation mode.
./llama.cpp/llama-cli \
--model unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF/DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf \
--cache-type-k q8_0 \
--threads 16 \
--prompt '<｜User｜>What is 1+1?<｜Assistant｜>' \
-no-cnv
Example output:
<think>
Okay, so I need to figure out what 1 plus 1 is. Hmm, where do I even start? I remember from school that adding numbers is pretty basic, but I want to make sure I understand it properly.
Let me think, 1 plus 1. So, I have one item and I add another one. Maybe like a apple plus another apple. If I have one apple and someone gives me another, I now have two apples. So, 1 plus 1 should be 2. That makes sense.
Wait, but sometimes math can be tricky. Could it be something else? Like, in a different number system maybe? But I think the question is straightforward, using regular numbers, not like binary or hexadecimal or anything.
I also recall that in arithmetic, addition is combining quantities. So, if you have two quantities of 1, combining them gives you a total of 2. Yeah, that seems right.
Is there a scenario where 1 plus 1 wouldn't be 2? I can't think of any...
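The special tokens above can also be applied by hand when you build prompts programmatically. A minimal sketch, assuming the single-turn template shown above (one `<｜User｜>` turn followed by an opened `<｜Assistant｜>` turn); the helper name `format_prompt` is hypothetical, not part of llama.cpp:

```python
# Manual prompt formatting for DeepSeek-R1-style GGUF models.
# The <｜User｜>/<｜Assistant｜> tokens mirror the llama.cpp example above;
# the helper itself is illustrative only.

USER_TOKEN = "<｜User｜>"
ASSISTANT_TOKEN = "<｜Assistant｜>"

def format_prompt(question: str) -> str:
    """Wrap a single user turn and open an assistant turn for generation."""
    return f"{USER_TOKEN}{question}{ASSISTANT_TOKEN}"

print(format_prompt("What is 1+1?"))
# → <｜User｜>What is 1+1?<｜Assistant｜>
```

The resulting string can be passed directly to `--prompt` as in the command above; for multi-turn use, a chat template formatter is the safer choice.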
- If you have a 24GB GPU (e.g. an RTX 4090), you can offload several layers to the GPU for faster processing. If you have multiple GPUs, you can likely offload more layers.
./llama.cpp/llama-cli \
--model unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF/DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf \
--cache-type-k q8_0 \
--threads 16 \
--prompt '<｜User｜>What is 1+1?<｜Assistant｜>' \
--n-gpu-layers 20 \
-no-cnv
Fine-tune your own reasoning model
We have a free Google Colab notebook that turns Llama 3.1 (8B) into a reasoning model: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-GRPO.ipynb
Model supported by Unsloth | Free notebook | Performance | Memory use |
---|---|---|---|
GRPO with Phi-4 (14B) | Start on Colab | 2x faster | 80% less |
Llama-3.2 (3B) | Start on Colab | 2.4x faster | 58% less |
Llama-3.2 (11B vision) | Start on Colab | 2x faster | 60% less |
Qwen2 VL (7B) | Start on Colab | 1.8x faster | 60% less |
Qwen2.5 (7B) | Start on Colab | 2x faster | 60% less |
Llama-3.1 (8B) | Start on Colab | 2.4x faster | 58% less |
Phi-3.5 (mini) | Start on Colab | 2x faster | 50% less |
Gemma 2 (9B) | Start on Colab | 2.4x faster | 58% less |
Mistral (7B) | Start on Colab | 2.2x faster | 62% less |
- The Llama 3.2 conversational notebook works with the ShareGPT ChatML / Vicuna templates.
- The text-completion notebook works with raw text. The DPO notebook replicates Zephyr.
- * Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
✨ Key Features
Model introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
- DeepSeek-R1-Zero: trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, it performs strongly on reasoning, exhibiting capabilities such as self-verification, reflection, and generating long chains of thought. However, it suffers from endless repetition, poor readability, and language mixing.
- DeepSeek-R1: incorporates cold-start data before RL, which resolves the issues of DeepSeek-R1-Zero and further improves reasoning performance. It achieves performance comparable to OpenAI-o1 on math, code, and reasoning tasks.
Model training
- Post-training: large-scale reinforcement learning on the base model. Applying RL directly to the base model, without SFT as a preliminary step, produced DeepSeek-R1-Zero. This is the first open research to show that the reasoning capability of LLMs can be incentivized purely through RL.
- Distillation: the reasoning patterns of the larger model are distilled into smaller models. We open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models based on Llama and Qwen distilled from DeepSeek-R1.
📦 Installation
The documentation does not specify installation commands, so no installation guide is provided here.
💻 Usage Examples
The documentation does not provide example code, so no usage examples are provided here.
📚 Documentation
Model downloads
DeepSeek-R1 models
Model | Total Params | Activated Params | Context Length | Download |
---|---|---|---|---|
DeepSeek-R1-Zero | 671B | 37B | 128K | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
DeepSeek-R1 | 671B | 37B | 128K | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
DeepSeek-R1-Distill models
Model | Base Model | Download |
---|---|---|
DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
Evaluation results
DeepSeek-R1 evaluation
Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o-0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
---|---|---|---|---|---|---|---|
 | Architecture | - | - | MoE | - | - | MoE |
 | # Activated Params | - | - | 37B | - | - | 37B |
 | # Total Params | - | - | 671B | - | - | 671B |
English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | 91.8 | 90.8 |
 | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | 92.9 |
 | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | 84.0 |
 | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | 92.2 |
 | IF-Eval (Prompt Strict) | 86.5 | 84.3 | 86.1 | 84.8 | - | 83.3 |
 | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | 75.7 | 71.5 |
 | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | 47.0 | 30.1 |
 | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | 82.5 |
 | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | 87.6 |
 | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | 92.3 |
Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | 65.9 |
 | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | 96.6 | 96.3 |
 | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | 2061 | 2029 |
 | SWE Verified (Resolved) | 50.8 | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
 | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | 61.7 | 53.3 |
Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | 79.8 |
 | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | 97.3 |
 | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | 78.8 |
Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | 92.8 |
 | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | 91.8 |
 | C-SimpleQA (Correct) | 55.4 | 58.7 | 68.0 | 40.3 | - | 63.7 |
Distilled model evaluation
Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
---|---|---|---|---|---|---|
GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | 1820 |
QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
DeepSeek-R1-Distill-Qwen-32B | 72.6 | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
DeepSeek-R1-Distill-Llama-70B | 70.0 | 86.7 | 94.5 | 65.2 | 57.5 | 1633 |
Chat website & API platform
- Chat website: you can chat with DeepSeek-R1 on DeepSeek's official website, chat.deepseek.com, by toggling the "DeepThink" button.
- API platform: we provide an OpenAI-compatible API on the DeepSeek platform, platform.deepseek.com.
Running locally
DeepSeek-R1 models
Please visit the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository for more information about running DeepSeek-R1 locally.
DeepSeek-R1-Distill models
DeepSeek-R1-Distill models can be used in the same way as Qwen or Llama models.
- Start a server with [vLLM](https://github.com/vllm-project/vllm):
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
- Start a server with [SGLang](https://github.com/sgl-project/sglang):
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
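Both servers expose an OpenAI-compatible `/v1/chat/completions` endpoint. A stdlib-only sketch of a request, assuming vLLM's default host and port (`localhost:8000`); the network call itself is left commented out because it requires one of the servers above to be running:

```python
import json
import urllib.request

# Assumed default vLLM endpoint; adjust host/port for your deployment.
URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    "messages": [
        # No system prompt: DeepSeek-R1 models expect all instructions
        # in the user turn (see the usage recommendations below).
        {"role": "user", "content": "What is 1+1?"},
    ],
    "temperature": 0.6,  # recommended range is 0.5-0.7
    "max_tokens": 1024,
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment once a server from the commands above is running:
# with urllib.request.urlopen(request) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client library can be pointed at the same endpoint instead of using raw HTTP.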
Usage recommendations
When using the DeepSeek-R1 series models, including for benchmarking, we recommend the following configuration to obtain the expected performance:
- Set the temperature within the range 0.5-0.7 (0.6 is recommended) to prevent endless repetition or incoherent output.
- Avoid adding a system prompt; all instructions should go in the user prompt.
- For math problems, it is advisable to include a directive in the prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
- When evaluating model performance, run multiple tests and average the results.
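The recommendations above can be bundled into a small request builder. A sketch under the stated assumptions; `build_math_request` is a hypothetical helper name, and the sampling values come straight from the list above:

```python
# Build an OpenAI-style request following the usage recommendations:
# temperature in 0.5-0.7, no system prompt, step-by-step directive for math.

MATH_DIRECTIVE = (
    "Please reason step by step, and put your final answer within \\boxed{}."
)

def build_math_request(question: str, temperature: float = 0.6) -> dict:
    """Assemble messages and sampling parameters for a math question."""
    if not 0.5 <= temperature <= 0.7:
        raise ValueError("recommended temperature range is 0.5-0.7")
    return {
        "messages": [
            # All instructions live in the single user turn; no system message.
            {"role": "user", "content": f"{question}\n{MATH_DIRECTIVE}"},
        ],
        "temperature": temperature,
    }

req = build_math_request("What is 1+1?")
```

For benchmarking, the same builder can be called repeatedly so that multiple runs share identical settings before averaging the results.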
🔧 Technical Details
The documentation does not describe specific technical details, so none are provided here.
📄 License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-Qwen-32B are derived from the Qwen-2.5 series, originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base, originally licensed under the [Llama 3.1 License](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct, originally licensed under the [Llama 3.3 License](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
Citation
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. 
Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
Contact us
If you have any questions, please raise an issue or contact us at service@deepseek.com.
Special thanks
Many thanks to the DeepSeek team for creating and releasing these models.
⚠️ Important note
Before running the DeepSeek-R1 series models locally, we recommend reviewing the Usage recommendations section.



