# 🚀 Jianyuan1/deepseek-r1-14b-cot-math-reasoning-full - GGUF
This repository provides GGUF-format model files for Jianyuan1/deepseek-r1-14b-cot-math-reasoning-full. The files were quantized on TensorBlock's machines and are compatible with llama.cpp as of commit b4882.
## ✨ Features
### 📦 Our projects
| Project | Description | Link |
| ------- | ----------- | ---- |
| Awesome MCP Servers | A comprehensive collection of Model Context Protocol (MCP) servers. | 👀 See what we built 👀 |
| TensorBlock Studio | A lightweight, open, and extensible multi-LLM interaction studio. | 👀 See what we built 👀 |
## 📚 Documentation
### Prompt template
```
<|begin▁of▁sentence|>{system_prompt}<|User|>{prompt}<|Assistant|>
```
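As a minimal sketch of how a filled-in template can be passed to llama.cpp's `llama-cli`, here is an illustrative invocation. The model path, prompts, and token budget are examples only, and this assumes a llama.cpp build recent enough to parse special tokens in the `-p` argument:

```bash
# Fill the template's {system_prompt} and {prompt} slots and pass the result
# directly to llama.cpp's llama-cli; all file names and text here are examples.
./llama-cli \
  -m ./deepseek-r1-14b-cot-math-reasoning-full-Q4_K_M.gguf \
  -p "<|begin▁of▁sentence|>You are a careful math tutor.<|User|>Solve 2x + 3 = 11 step by step.<|Assistant|>" \
  -n 512
```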
### Model file specification
| Property | Details |
| -------- | ------- |
| Model Type | GGUF format model files for Jianyuan1/deepseek-r1-14b-cot-math-reasoning-full |
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| deepseek-r1-14b-cot-math-reasoning-full-Q2_K.gguf | Q2_K | 5.770 GB | smallest, significant quality loss - not recommended for most purposes |
| deepseek-r1-14b-cot-math-reasoning-full-Q3_K_S.gguf | Q3_K_S | 6.660 GB | very small, high quality loss |
| deepseek-r1-14b-cot-math-reasoning-full-Q3_K_M.gguf | Q3_K_M | 7.339 GB | very small, high quality loss |
| deepseek-r1-14b-cot-math-reasoning-full-Q3_K_L.gguf | Q3_K_L | 7.925 GB | small, substantial quality loss |
| deepseek-r1-14b-cot-math-reasoning-full-Q4_0.gguf | Q4_0 | 8.518 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| deepseek-r1-14b-cot-math-reasoning-full-Q4_K_S.gguf | Q4_K_S | 8.573 GB | small, greater quality loss |
| deepseek-r1-14b-cot-math-reasoning-full-Q4_K_M.gguf | Q4_K_M | 8.988 GB | medium, balanced quality - recommended |
| deepseek-r1-14b-cot-math-reasoning-full-Q5_0.gguf | Q5_0 | 10.267 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| deepseek-r1-14b-cot-math-reasoning-full-Q5_K_S.gguf | Q5_K_S | 10.267 GB | large, low quality loss - recommended |
| deepseek-r1-14b-cot-math-reasoning-full-Q5_K_M.gguf | Q5_K_M | 10.509 GB | large, very low quality loss - recommended |
| deepseek-r1-14b-cot-math-reasoning-full-Q6_K.gguf | Q6_K | 12.125 GB | very large, extremely low quality loss |
| deepseek-r1-14b-cot-math-reasoning-full-Q8_0.gguf | Q8_0 | 15.702 GB | very large, extremely low quality loss - not recommended |
## 📦 Installation
### Command line
First, install the Hugging Face CLI:
```bash
pip install -U "huggingface_hub[cli]"
```
Then download an individual model file to a local directory:
```bash
huggingface-cli download tensorblock/deepseek-r1-14b-cot-math-reasoning-full-GGUF --include "deepseek-r1-14b-cot-math-reasoning-full-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```bash
huggingface-cli download tensorblock/deepseek-r1-14b-cot-math-reasoning-full-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
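After downloading, you can serve the file with llama.cpp, with which these quants are compatible (see above). A minimal sketch, assuming a local llama.cpp build; the filename, context size, and port below are illustrative:

```bash
# Serve the downloaded GGUF file over llama.cpp's built-in HTTP server.
# llama-server exposes an OpenAI-compatible API on the given port.
./llama-server \
  -m MY_LOCAL_DIR/deepseek-r1-14b-cot-math-reasoning-full-Q4_K_M.gguf \
  -c 4096 \
  --port 8080
```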
## 📄 License
This project is licensed under the MIT license.