🚀 DeepSeek-R1
DeepSeek-R1 is a first-generation reasoning model that achieves performance comparable to OpenAI-o1 on math, code, and reasoning tasks. Unsloth Dynamic v2.0 excels at quantization: its 1.58-bit + 2-bit dynamic quants of DeepSeek-R1 quantize selectively, greatly improving accuracy. This repository also provides run instructions, evaluation results, and usage recommendations to support related research.
🚀 Quick Start
Running the Model
Instructions for running this model in llama.cpp are below; a more detailed walkthrough is available at unsloth.ai/blog/deepseekr1-dynamic:
- Do not forget the <|User|> and <|Assistant|> tokens! Alternatively, use a chat template formatter.
- Obtain the latest llama.cpp from https://github.com/ggerganov/llama.cpp. You can also follow the build instructions below:
```bash
apt-get update
apt-get install build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
-DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```
- It is best to use `--min-p 0.05` to counteract very rare token predictions; this works especially well for the 1.58-bit model.
- Download the model via:
```python
# pip install huggingface_hub hf_transfer
# import os  # Optional for faster downloading
# os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="unsloth/DeepSeek-R1-GGUF",
    local_dir="DeepSeek-R1-GGUF",
    allow_patterns=["*UD-IQ1_S*"],  # Select quant type UD-IQ1_S for 1.58bit
)
```
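If you want to check which quant folders the repo actually hosts before committing to a download, a small sketch (assuming only `huggingface_hub` is installed) can list them:

```python
# List the top-level quant folders in the repo to pick an allow_patterns value.
from huggingface_hub import list_repo_files

files = list_repo_files("unsloth/DeepSeek-R1-GGUF")
print(sorted({f.split("/")[0] for f in files if f.endswith(".gguf")}))
```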
- Example using a Q8_0 K-quantized cache (note that `-no-cnv` disables the automatic conversational mode):
```bash
./llama.cpp/llama-cli \
--model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
--cache-type-k q8_0 \
--threads 12 -no-cnv --prio 2 \
--temp 0.6 \
--ctx-size 8192 \
--seed 3407 \
--prompt "<|User|>Create a Flappy Bird game in Python.<|Assistant|>"
```
Example output:
```
<think>
Okay, so I need to figure out what 1 plus 1 is. Hmm, where do I even start? I remember from school that adding numbers is pretty basic, but I want to make sure I understand it properly.
Let me think, 1 plus 1. So, I have one item and I add another one. Maybe like a apple plus another apple. If I have one apple and someone gives me another, I now have two apples. So, 1 plus 1 should be 2. That makes sense.
Wait, but sometimes math can be tricky. Could it be something else? Like, in a different number system maybe? But I think the question is straightforward, using regular numbers, not like binary or hexadecimal or anything.
I also recall that in arithmetic, addition is combining quantities. So, if you have two quantities of 1, combining them gives you a total of 2. Yeah, that seems right.
Is there a scenario where 1 plus 1 wouldn't be 2? I can't think of any...
```
- If you have a 24GB GPU (e.g. an RTX 4090), you can offload several layers to the GPU for faster processing; with multiple GPUs you can offload more layers. A rough heuristic for choosing `--n-gpu-layers` is sketched after the command below.
```bash
./llama.cpp/llama-cli \
--model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
--cache-type-k q8_0 \
--threads 12 -no-cnv --prio 2 \
--n-gpu-layers 7 \
--temp 0.6 \
--ctx-size 8192 \
--seed 3407 \
--prompt "<|User|>Create a Flappy Bird game in Python.<|Assistant|>"
```
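The value `7` above follows a rule of thumb from the Unsloth write-up; this is our reading of it, as an approximate sketch (the ~131GB file size and 61-layer count correspond to the 1.58-bit quant):

```python
# Heuristic (approximate, not an official formula):
#   n_gpu_layers ≈ (VRAM_GB / file_size_GB) * n_layers - 4
def estimate_gpu_layers(vram_gb: float, file_size_gb: float, n_layers: int = 61) -> int:
    return max(0, int(vram_gb / file_size_gb * n_layers - 4))

# The 1.58-bit UD-IQ1_S quant is roughly 131GB and DeepSeek-R1 has 61 layers,
# so a 24GB RTX 4090 lands at about 7 offloaded layers, as in the command above.
print(estimate_gpu_layers(24, 131))  # -> 7
```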
- If you want to merge the split weights into a single file, use the following command (newer llama.cpp builds can also load the split GGUF directly when pointed at the first shard, so merging is optional):
```bash
./llama.cpp/llama-gguf-split --merge \
    DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    merged_file.gguf
```
Fine-tuning
We have a free Google Colab notebook for turning Llama 3.1 (8B) into a reasoning model: click to view.
All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a fine-tuned model that is 2x faster and can be exported to GGUF or vLLM, or uploaded to Hugging Face. A condensed script-style sketch of this workflow appears after the notes below.
Unsloth supported model | Free notebook link | Performance gain | Memory reduction |
---|---|---|---|
GRPO with Phi-4 (14B) | ▶️ Start on Colab | 2x faster | 80% less |
Llama-3.2 (3B) | ▶️ Start on Colab | 2.4x faster | 58% less |
Llama-3.2 (11B vision) | ▶️ Start on Colab | 2x faster | 60% less |
Qwen2 VL (7B) | ▶️ Start on Colab | 1.8x faster | 60% less |
Qwen2.5 (7B) | ▶️ Start on Colab | 2x faster | 60% less |
Llama-3.1 (8B) | ▶️ Start on Colab | 2.4x faster | 58% less |
Phi-3.5 (mini) | ▶️ Start on Colab | 2x faster | 50% less |
Gemma 2 (9B) | ▶️ Start on Colab | 2.4x faster | 58% less |
Mistral (7B) | ▶️ Start on Colab | 2.2x faster | 62% less |
Notes
- The Llama 3.2 conversational notebook works with the ShareGPT ChatML / Vicuna template.
- The text-completion notebook is for raw text. The DPO notebook replicates Zephyr.
- * Kaggle provides 2x T4 GPUs, but we use just 1. Due to overhead, 1x T4 is 5x faster.
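For readers who prefer a script to a notebook, here is a condensed, illustrative sketch of the workflow the notebooks automate; the model name, dataset file, and hyperparameters are placeholders rather than the notebooks' exact settings, and it assumes `unsloth`, `trl`, and `datasets` are installed:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load a 4-bit base model with Unsloth's fast kernels.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",  # placeholder: any supported model
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Your dataset: a JSONL file where each record has a "text" field.
dataset = load_dataset("json", data_files="data.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Export the fine-tuned model to GGUF for llama.cpp.
model.save_pretrained_gguf("model", tokenizer)
```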
✨ Key Features
Unsloth Dynamic v2.0 advantages
- High accuracy: Unsloth Dynamic v2.0 delivers superior accuracy, outperforming other leading quantization methods.
- Selective quantization: Unsloth's DeepSeek-R1 1.58-bit + 2-bit dynamic quants quantize selectively, greatly improving accuracy over standard 1-bit/2-bit quantization.
DeepSeek-R1 model features
- Strong reasoning: DeepSeek-R1 achieves performance comparable to OpenAI-o1 on math, code, and reasoning tasks.
- Open source: DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 (based on Llama and Qwen) are open-sourced; DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
Model training and optimization
- Reinforcement learning: Applying reinforcement learning (RL) directly to the base model, without supervised fine-tuning (SFT) as a preliminary step, produced DeepSeek-R1-Zero, a model that demonstrates capabilities such as self-verification, reflection, and generating long chains of thought (CoT).
- Multi-stage training: The pipeline for developing DeepSeek-R1 incorporates two RL stages and two SFT stages, aimed at discovering better reasoning patterns and aligning with human preferences.
📦 Installation
Environment setup
Please follow the steps in the Quick Start section above to install llama.cpp and download the model.
Runtime settings
When running the DeepSeek-R1 series of models, the following configuration is recommended for the expected performance (a minimal sketch applying these settings follows the list):
- Set the temperature within the 0.5 - 0.7 range (0.6 recommended) to prevent endless repetition or incoherent output.
- Avoid adding a system prompt; all instructions should be contained within the user prompt.
- For math problems, include a directive in the prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
- When evaluating model performance, run multiple tests and average the results.
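Applied together, the settings look like this minimal sketch (assuming the `llama-cpp-python` bindings and the 1.58-bit shard downloaded earlier; adjust the path to your quant):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf",
    n_ctx=8192,
)

question = "What is 1 + 1?"
# All instructions go in the user turn; no system prompt is added.
prompt = (
    "<|User|>" + question +
    " Please reason step by step, and put your final answer within \\boxed{}."
    "<|Assistant|>"
)
out = llm(prompt, max_tokens=2048, temperature=0.6, min_p=0.05)
print(out["choices"][0]["text"])
```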
📚 Documentation
Model introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning but suffers from endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates cold-start data before RL.
Model training
- Reinforcement learning: Applying RL directly to the base model produced DeepSeek-R1-Zero, which demonstrates self-verification, reflection, and long-CoT generation; this is the first open research to show that LLM reasoning capabilities can be incentivized purely through RL.
- Multi-stage training: The pipeline for developing DeepSeek-R1 incorporates two RL stages and two SFT stages, aimed at discovering better reasoning patterns and aligning with human preferences.
Model downloads
DeepSeek-R1 models
Model | # Total Params | # Activated Params | Context Length | Download |
---|---|---|---|---|
DeepSeek-R1-Zero | 671B | 37B | 128K | 🤗 HuggingFace |
DeepSeek-R1 | 671B | 37B | 128K | 🤗 HuggingFace |
DeepSeek-R1-Distill models
Model | Base Model | Download |
---|---|---|
DeepSeek-R1-Distill-Qwen-1.5B | Qwen2.5-Math-1.5B | 🤗 HuggingFace |
DeepSeek-R1-Distill-Qwen-7B | Qwen2.5-Math-7B | 🤗 HuggingFace |
DeepSeek-R1-Distill-Llama-8B | Llama-3.1-8B | 🤗 HuggingFace |
DeepSeek-R1-Distill-Qwen-14B | Qwen2.5-14B | 🤗 HuggingFace |
DeepSeek-R1-Distill-Qwen-32B | Qwen2.5-32B | 🤗 HuggingFace |
DeepSeek-R1-Distill-Llama-70B | Llama-3.3-70B-Instruct | 🤗 HuggingFace |
Evaluation results
DeepSeek-R1 evaluation
For all models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of 0.6, a top-p value of 0.95, and generate 64 responses per query to estimate pass@1.
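For concreteness, here is our reading of how those 64 samples turn into the reported numbers; this is an illustrative sketch, not the official evaluation harness:

```python
from collections import Counter

def pass_at_1(correct_flags: list[bool]) -> float:
    # pass@1: average per-sample correctness over the k=64 generations.
    return sum(correct_flags) / len(correct_flags)

def cons_at_k(answers: list[str], reference: str) -> bool:
    # cons@64: majority-vote the k extracted answers, then compare once.
    majority, _ = Counter(answers).most_common(1)[0]
    return majority == reference
```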
Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o-0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
---|---|---|---|---|---|---|---|
 | Architecture | - | - | MoE | - | - | MoE |
 | # Activated Params | - | - | 37B | - | - | 37B |
 | # Total Params | - | - | 671B | - | - | 671B |
English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | 91.8 | 90.8 |
 | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | 92.9 |
 | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | 84.0 |
 | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | 92.2 |
 | IF-Eval (Prompt Strict) | 86.5 | 84.3 | 86.1 | 84.8 | - | 83.3 |
 | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | 75.7 | 71.5 |
 | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | 47.0 | 30.1 |
 | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | 82.5 |
 | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | 87.6 |
 | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | 92.3 |
Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | 65.9 |
 | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | 96.6 | 96.3 |
 | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | 2061 | 2029 |
 | SWE Verified (Resolved) | 50.8 | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
 | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | 61.7 | 53.3 |
Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | 79.8 |
 | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | 97.3 |
 | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | 78.8 |
Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | 92.8 |
 | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | 91.8 |
 | C-SimpleQA (Correct) | 55.4 | 58.7 | 68.0 | 40.3 | - | 63.7 |
Distilled model evaluation
Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
---|---|---|---|---|---|---|
GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | 1820 |
QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
DeepSeek-R1-Distill-Qwen-32B | 72.6 | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
DeepSeek-R1-Distill-Llama-70B | 70.0 | 86.7 | 94.5 | 65.2 | 57.5 | 1633 |
Using the models
Chat website & API platform
- Chat website: You can chat with DeepSeek-R1 on DeepSeek's official website, chat.deepseek.com, with the "DeepThink" button switched on.
- API platform: We provide an OpenAI-compatible API on the DeepSeek Platform at platform.deepseek.com.
Running locally
- DeepSeek-R1 models: Please visit the DeepSeek-V3 repository for more information about running DeepSeek-R1 locally.
- DeepSeek-R1-Distill models: The DeepSeek-R1-Distill models can be used in the same way as Qwen or Llama models. For example, you can easily start a service with vLLM:
```bash
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also start a service with SGLang (a client-side query sketch follows these two commands):
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
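Both servers expose an OpenAI-compatible endpoint, so a query can look like the following sketch (assuming vLLM's default port 8000; SGLang defaults to 30000 unless configured otherwise):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local server; no real key is needed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "What is 1 + 1? Please reason step by step."}],
    temperature=0.6,  # within the recommended 0.5 - 0.7 range
)
print(resp.choices[0].message.content)
```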
🔧 Technical Details
Model training techniques
- Reinforcement learning: RL is applied directly to the base model, without supervised fine-tuning as a preliminary step, allowing the model to explore chains of thought (CoT) for solving complex problems.
- Multi-stage training strategy: The DeepSeek-R1 pipeline comprises two RL stages and two SFT stages; across these stages the model discovers better reasoning patterns and is aligned with human preferences.
Model quantization
Unsloth's DeepSeek-R1 uses 1.58-bit + 2-bit dynamic quantization that quantizes selectively, improving accuracy over standard 1-bit/2-bit quantization.
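As a toy illustration of the idea (not Unsloth's actual recipe; the tensor-name patterns and type choices below are hypothetical), selective quantization assigns higher-precision types to accuracy-critical tensors and ultra-low bits to the rest:

```python
# Toy illustration of selective (dynamic) quantization: keep sensitive
# tensors at higher precision, push everything else to ~1.58 bits.
def choose_quant_type(tensor_name: str) -> str:
    sensitive = ("attn", "embed", "output", "norm")
    if any(key in tensor_name for key in sensitive):
        return "Q4_K"   # higher precision for accuracy-critical tensors
    return "IQ1_S"      # aggressive 1.58-bit for the bulk (e.g. MoE experts)

for name in ["blk.0.attn_q.weight", "blk.3.ffn_down_exps.weight"]:
    print(name, "->", choose_quant_type(name))
```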
Evaluation setup
For all models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of 0.6, a top-p value of 0.95, and generate 64 responses per query to estimate pass@1.
📄 License
This code repository and the model weights are licensed under the MIT License. The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including but not limited to distillation for training other LLMs. Please note:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-Qwen-32B are derived from the Qwen-2.5 series, originally licensed under Apache 2.0, and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and originally under the llama3.1 license.
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and originally under the llama3.3 license.
Acknowledgements
Many thanks to the DeepSeek team for creating and releasing these models.
Citation
```bibtex
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
Contact
If you have any questions, please raise an issue or contact us at service@deepseek.com.



