🚀 DeepSeek-R1
DeepSeek-R1 is a family of reasoning models comprising DeepSeek-R1-Zero and DeepSeek-R1. These models excel at math, code, and reasoning tasks, achieving performance comparable to OpenAI-o1. The project open-sources the models, along with distilled variants based on Llama and Qwen, to support the research community.
🚀 Quick Start
Running the Model
- See all versions for DeepSeek-R1 variants in GGUF, 4-bit, and original formats.
- Detailed instructions for running this model in llama.cpp are in the blog post: unsloth.ai/blog/deepseek-r1.
Running Locally
- DeepSeek-R1 models: see the DeepSeek-V3 repository for more information on running them.
- DeepSeek-R1-Distill models: can be used in the same way as Qwen or Llama models. For example, launch a service with vLLM:
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
A service can also be launched with SGLang (a sketch of querying the resulting endpoint follows these commands):
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
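Both vLLM and SGLang expose an OpenAI-compatible endpoint once the server is up, so a standard client can query the model. A minimal sketch, assuming vLLM's default port 8000 (SGLang defaults to 30000; adjust `base_url` to match your setup):

```python
# Minimal sketch: query the locally served distill model through its
# OpenAI-compatible endpoint. Assumes vLLM's default port 8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "What is 1+1?"}],
    temperature=0.6,  # recommended range 0.5-0.7; see Usage Recommendations
    top_p=0.95,
)
print(response.choices[0].message.content)
```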
Free Fine-Tuning
All notebooks are beginner-friendly! Add your dataset, click "Run All", and you get a fine-tuned model up to 2x faster, which can be exported to GGUF or vLLM formats or uploaded to Hugging Face. A minimal fine-tuning sketch follows the notes below.

| Unsloth-supported model | Free notebook link | Performance gain | Memory reduction |
| --- | --- | --- | --- |
| Llama-3.2 (3B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Llama-3.2 (11B vision) | ▶️ Start on Colab | 2x faster | 60% less |
| Qwen2 VL (7B) | ▶️ Start on Colab | 1.8x faster | 60% less |
| Qwen2.5 (7B) | ▶️ Start on Colab | 2x faster | 60% less |
| Llama-3.1 (8B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Phi-3.5 (mini) | ▶️ Start on Colab | 2x faster | 50% less |
| Gemma 2 (9B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral (7B) | ▶️ Start on Colab | 2.2x faster | 62% less |

Notes
- The Llama 3.2 conversational notebook is for the ShareGPT ChatML / Vicuna template.
- The text-completion notebook is for raw text. The DPO notebook reproduces Zephyr.
- Kaggle provides 2 T4 GPUs, but we use only 1; due to overhead, 1 T4 is 5x faster.
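For reference, a minimal sketch of what these notebooks do with Unsloth's API. The model name, dataset, prompt format, and hyperparameters below are illustrative assumptions; the Colab notebooks above hold the exact, tested configurations, and trl argument names vary across versions:

```python
# Minimal Unsloth fine-tuning sketch; values here are illustrative, not the
# notebooks' exact settings.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/DeepSeek-R1-Distill-Llama-8B",  # assumed repo id
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization to reduce memory use
)
model = FastLanguageModel.get_peft_model(  # attach LoRA adapters
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Build a "text" field from an example instruction dataset.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")
dataset = dataset.map(lambda ex: {
    "text": f"### Instruction:\n{ex['instruction']}\n### Response:\n{ex['output']}"
})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(per_device_train_batch_size=2, max_steps=60,
                           output_dir="outputs"),
)
trainer.train()
```

After training, the model can be exported to GGUF or vLLM formats or pushed to Hugging Face, as noted above.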
✨ Key Features
Model Introduction
- DeepSeek-R1-Zero: trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step. It shows strong reasoning ability but suffers from endless repetition, poor readability, and language mixing.
- DeepSeek-R1: incorporates cold-start data before RL, which resolves DeepSeek-R1-Zero's issues and further improves reasoning, matching OpenAI-o1 on math, code, and reasoning tasks.
Model Training
- Post-training: RL is applied directly to the base model without SFT, producing DeepSeek-R1-Zero. The model develops self-verification, reflection, and long chain-of-thought (CoT) generation; this is the first open research to validate that LLM reasoning capability can be incentivized purely through RL.
- Development pipeline: DeepSeek-R1 is built with a pipeline of two RL stages and two SFT stages, designed to discover better reasoning patterns and align with human preferences.
Model Distillation
The reasoning patterns of larger models can be distilled into smaller ones; the open-sourced DeepSeek-R1 and its API will help the research community distill better small models in the future. Several dense models widely used in the research community were fine-tuned on reasoning data generated by DeepSeek-R1, and evaluations show these distilled small dense models perform exceptionally well on benchmarks.
📦 Installation
Model Download
DeepSeek-R1 Models

| Model | Total Params | Activated Params | Context Length | Download |
| --- | --- | --- | --- | --- |
| DeepSeek-R1-Zero | 671B | 37B | 128K | 🤗 HuggingFace |
| DeepSeek-R1 | 671B | 37B | 128K | 🤗 HuggingFace |

DeepSeek-R1-Distill Models

| Model | Base Model | Download |
| --- | --- | --- |
| DeepSeek-R1-Distill-Qwen-1.5B | Qwen2.5-Math-1.5B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Qwen-7B | Qwen2.5-Math-7B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Llama-8B | Llama-3.1-8B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Qwen-14B | Qwen2.5-14B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Qwen-32B | Qwen2.5-32B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Llama-70B | Llama-3.3-70B-Instruct | 🤗 HuggingFace |
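The links above point to Hugging Face repositories. A minimal sketch of fetching weights programmatically with `huggingface_hub` (the target directory is an arbitrary choice):

```python
# Minimal sketch: download a model's weights from its Hugging Face repo.
# The repo id matches the table above; local_dir is an arbitrary choice.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    local_dir="DeepSeek-R1-Distill-Qwen-32B",
)
```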
💻 Usage Examples
Basic Usage
An example of running the model in llama.cpp:
# Run the Q4_K_M GGUF build with the K cache quantized to q8_0;
# -no-cnv disables interactive conversation mode so the prompt runs once.
./llama.cpp/llama-cli \
--model unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF/DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf \
--cache-type-k q8_0 \
--threads 16 \
--prompt '<|User|>What is 1+1?<|Assistant|>' \
-no-cnv
Advanced Usage
If you have a GPU (for example, an RTX 4090 with 24GB of VRAM), you can offload several layers to the GPU to speed up processing:
# Same as above, but --n-gpu-layers offloads 20 of the model's layers to the GPU.
./llama.cpp/llama-cli \
--model unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF/DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf \
--cache-type-k q8_0 \
--threads 16 \
--prompt '<|User|>What is 1+1?<|Assistant|>' \
--n-gpu-layers 20 \
-no-cnv
📚 Documentation
Evaluation Results
DeepSeek-R1 Evaluation
For all models, the maximum generation length is set to 32,768 tokens. For benchmarks that require sampling, a temperature of $0.6$ and a top-p value of $0.95$ are used, and 64 responses are generated per query to estimate pass@1.
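Concretely, as defined in the DeepSeek-R1 paper, pass@1 over the $k = 64$ sampled responses is the average per-sample correctness:

$$\text{pass@1} = \frac{1}{k} \sum_{i=1}^{k} p_i,$$

where $p_i \in \{0, 1\}$ denotes the correctness of the $i$-th response.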
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | 91.8 | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | 92.9 |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | 84.0 |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | 92.2 |
| | IF-Eval (Prompt Strict) | 86.5 | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | 75.7 | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | 47.0 | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | 82.5 |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | 87.6 |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | 92.3 |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | 65.9 |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | 96.6 | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | 2061 | 2029 |
| | SWE Verified (Resolved) | 50.8 | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | 61.7 | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | 79.8 |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | 97.3 |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | 78.8 |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | 92.8 |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | 91.8 |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | 68.0 | 40.3 | - | 63.7 |
Distilled Model Evaluation

| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | 1820 |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | 72.6 | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | 86.7 | 94.5 | 65.2 | 57.5 | 1633 |
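Here cons@64 denotes majority voting over the 64 sampled responses per problem. One plausible formalization (the notation is assumed here, not taken verbatim from the paper): over $N$ problems,

$$\text{cons@64} = \frac{1}{N} \sum_{j=1}^{N} \mathbb{1}\!\left[\operatorname{majority}\left(a_{j,1}, \dots, a_{j,64}\right) = a_j^{*}\right],$$

where $a_{j,i}$ is the answer extracted from the $i$-th sample for problem $j$ and $a_j^{*}$ is the reference answer.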
Chat Website & API Platform
- Chat website: chat with DeepSeek-R1 on DeepSeek's official website, chat.deepseek.com, with the "DeepThink" toggle switched on.
- API platform: an OpenAI-compatible API is available on the DeepSeek platform at platform.deepseek.com (a minimal client sketch follows).
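A minimal sketch of calling the OpenAI-compatible API. The model id "deepseek-reasoner" is the reasoning-model name at the time of writing; check platform.deepseek.com for the current id and for obtaining an API key:

```python
# Minimal sketch: call the OpenAI-compatible DeepSeek API.
# "deepseek-reasoner" is assumed to be the current reasoning-model id;
# verify against the platform docs.
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="<your API key>")

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "What is 1+1?"}],
)
print(response.choices[0].message.content)
```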
Usage Recommendations
When using the DeepSeek-R1 series of models (including for benchmarking), the following configuration is recommended to obtain the expected performance; a sketch applying these settings follows this list.
- Set the temperature within the range 0.5-0.7 (0.6 recommended) to prevent endless repetition or incoherent output.
- Avoid adding a system prompt; all instructions should be contained in the user prompt.
- For math problems, include a directive in the prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
- When evaluating model performance, run multiple tests and average the results.
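A minimal sketch applying these recommendations with Hugging Face transformers: temperature 0.6, top-p 0.95, no system prompt, and the \boxed{} directive for a math question. The model choice and question are illustrative:

```python
# Minimal sketch of the recommended settings; the model and prompt are
# illustrative, not a prescribed configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# No system message: all instructions go in the user prompt.
messages = [{
    "role": "user",
    "content": "What is 7 * 8? Please reason step by step, "
               "and put your final answer within \\boxed{}.",
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=2048, do_sample=True, temperature=0.6, top_p=0.95
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```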
🔧 Technical Details
Model Architecture
DeepSeek-R1 and DeepSeek-R1-Zero are trained from DeepSeek-V3-Base and use a Mixture-of-Experts (MoE) architecture with 671B total parameters, of which 37B are activated per token.
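To illustrate why only 37B of the 671B parameters are active per token, here is a toy sketch of top-k expert routing. The dimensions, expert count, and k are illustrative, not DeepSeek-V3's actual configuration:

```python
# Toy top-k Mixture-of-Experts routing: each token is processed by only
# k experts, so only a fraction of total parameters is active per token.
import torch

def moe_forward(x, experts, router, k=2):
    # x: (tokens, d_model); router scores each token against every expert
    scores = torch.softmax(x @ router, dim=-1)
    topk_scores, topk_idx = scores.topk(k, dim=-1)  # keep only k experts per token
    out = torch.zeros_like(x)
    for t in range(x.shape[0]):
        for s, i in zip(topk_scores[t], topk_idx[t]):
            out[t] += s * experts[i](x[t])  # weighted sum of the chosen experts
    return out

experts = [torch.nn.Linear(16, 16) for _ in range(8)]  # 8 tiny stand-in experts
router = torch.randn(16, 8)
x = torch.randn(4, 16)
print(moe_forward(x, experts, router).shape)  # torch.Size([4, 16])
```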
Training Method
RL is applied directly to the base model without SFT, producing DeepSeek-R1-Zero. DeepSeek-R1 is then developed through a pipeline of two RL stages and two SFT stages, designed to discover better reasoning patterns and align with human preferences.
Evaluation Setup
The maximum generation length is set to 32,768 tokens for all models. For benchmarks that require sampling, a temperature of $0.6$ and a top-p value of $0.95$ are used, with 64 responses generated per query to estimate pass@1.
📄 License
This code repository and the model weights are licensed under the MIT License. The DeepSeek-R1 series supports commercial use and permits any modifications and derivative works, including but not limited to distillation for training other LLMs. Please note:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-Qwen-32B are derived from the Qwen-2.5 series, originally licensed under Apache 2.0, and are fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base, originally under the Llama 3.1 license.
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct, originally under the Llama 3.3 license.
Citation
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
Contact Us
If you have any questions, please raise an issue or contact us at service@deepseek.com.



