# 🚀 m-a-p/YuE-s1-7B-anneal-en-cot - GGUF
This repository offers GGUF format model files for m-a-p/YuE-s1-7B-anneal-en-cot. The files were quantized on machines provided by TensorBlock and are compatible with llama.cpp as of commit ec7f3ac.
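As a quick sanity check that a downloaded quant loads, here is a minimal sketch using the llama-cpp-python bindings (our choice of runtime, not the only option; any llama.cpp-compatible runtime works). YuE is a music-generation model, so meaningful output requires the upstream YuE inference pipeline; this only verifies that the file loads and produces tokens. The local path is hypothetical.

```python
# Minimal sketch: load one of the quantized files from this repo
# with llama-cpp-python (pip install llama-cpp-python).
# The model path is hypothetical -- point it at whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./YuE-s1-7B-anneal-en-cot-Q4_K_M.gguf",
    n_ctx=4096,  # context window; lower it to reduce memory use
)
out = llm("Hello", max_tokens=32)
print(out["choices"][0]["text"])
```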

## 🚀 Quick Start
The sections below cover our related projects, the prompt template, the model file specification, and download instructions.
## ✨ Features

### Our projects
| Project | Description | Link |
| ------- | ----------- | ---- |
| Awesome MCP Servers | A comprehensive collection of Model Context Protocol (MCP) servers. | See what we built |
| TensorBlock Studio | A lightweight, open, and extensible multi-LLM interaction studio. | See what we built |
### Prompt template

### Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| YuE-s1-7B-anneal-en-cot-Q2_K.gguf | Q2_K | 2.432 GB | smallest, significant quality loss - not recommended for most purposes |
| YuE-s1-7B-anneal-en-cot-Q3_K_S.gguf | Q3_K_S | 2.812 GB | very small, high quality loss |
| YuE-s1-7B-anneal-en-cot-Q3_K_M.gguf | Q3_K_M | 3.096 GB | very small, high quality loss |
| YuE-s1-7B-anneal-en-cot-Q3_K_L.gguf | Q3_K_L | 3.340 GB | small, substantial quality loss |
| YuE-s1-7B-anneal-en-cot-Q4_0.gguf | Q4_0 | 3.593 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| YuE-s1-7B-anneal-en-cot-Q4_K_S.gguf | Q4_K_S | 3.617 GB | small, greater quality loss |
| YuE-s1-7B-anneal-en-cot-Q4_K_M.gguf | Q4_K_M | 3.788 GB | medium, balanced quality - recommended |
| YuE-s1-7B-anneal-en-cot-Q5_0.gguf | Q5_0 | 4.328 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| YuE-s1-7B-anneal-en-cot-Q5_K_S.gguf | Q5_K_S | 4.328 GB | large, low quality loss - recommended |
| YuE-s1-7B-anneal-en-cot-Q5_K_M.gguf | Q5_K_M | 4.429 GB | large, very low quality loss - recommended |
| YuE-s1-7B-anneal-en-cot-Q6_K.gguf | Q6_K | 5.109 GB | very large, extremely low quality loss |
| YuE-s1-7B-anneal-en-cot-Q8_0.gguf | Q8_0 | 6.617 GB | very large, extremely low quality loss - not recommended |
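To double-check the table above against what is actually hosted, the file list can be fetched programmatically. A small sketch using the huggingface_hub client; the repo id is taken from the download commands below:

```python
# Sketch: list the GGUF files hosted in this repo via huggingface_hub.
from huggingface_hub import list_repo_files

files = list_repo_files("tensorblock/YuE-s1-7B-anneal-en-cot-GGUF")
for name in sorted(f for f in files if f.endswith(".gguf")):
    print(name)
```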
## 📦 Installation

### Downloading instructions

#### Command line
First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```
Then download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/YuE-s1-7B-anneal-en-cot-GGUF --include "YuE-s1-7B-anneal-en-cot-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
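If you prefer Python, a rough equivalent of the command above using hf_hub_download, with the same repo id and filename:

```python
# Sketch: Python equivalent of the single-file download above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="tensorblock/YuE-s1-7B-anneal-en-cot-GGUF",
    filename="YuE-s1-7B-anneal-en-cot-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(path)  # local path to the downloaded file
```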
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/YuE-s1-7B-anneal-en-cot-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
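A Python sketch of the pattern-based download using snapshot_download; the allow_patterns glob mirrors the --include flag above. The symlink flag is omitted here because recent huggingface_hub releases deprecate it, which is worth verifying against your installed version.

```python
# Sketch: pattern-based download via snapshot_download.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="tensorblock/YuE-s1-7B-anneal-en-cot-GGUF",
    allow_patterns=["*Q4_K*gguf"],  # same glob as the --include flag above
    local_dir="MY_LOCAL_DIR",
)
```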
## 📄 License
This project is licensed under the CC BY-NC 4.0 license.