Qwen/Qwen2-VL-2B - GGUF
This repository provides GGUF-format model files for Qwen/Qwen2-VL-2B. The files were quantized on machines provided by TensorBlock and are compatible with llama.cpp as of commit b4329.
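If you do not already have a compatible llama.cpp build, the sketch below fetches the b4329 release tag and builds the CLI tools. It assumes git plus a standard CMake toolchain and follows llama.cpp's usual build steps; adjust for your platform as needed:

```bash
# Fetch llama.cpp and check out the b4329 release tag,
# the revision this card states these files are compatible with.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b4329

# Configure and build the CLI tools in release mode.
cmake -B build
cmake --build build --config Release
```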

Quick Start
This section provides a quick guide on how to get started with the Qwen/Qwen2-VL-2B - GGUF model.
Features
- Multimodal Support: This model supports image-text-to-text tasks, making it suitable for a wide range of multimodal applications.
- GGUF Format: The model files are in GGUF format, which is compatible with llama.cpp.
Installation
Command line
- First, install the Hugging Face CLI:

```bash
pip install -U "huggingface_hub[cli]"
```

- Then, download an individual model file to a local directory:

```bash
huggingface-cli download tensorblock/Qwen2-VL-2B-GGUF --include "Qwen2-VL-2B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

- If you want to download multiple model files matching a pattern (e.g., *Q4_K*gguf), you can try:

```bash
huggingface-cli download tensorblock/Qwen2-VL-2B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
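Once a file is downloaded, you can sanity-check it with llama.cpp's text CLI. This is a minimal sketch, assuming the build/bin/llama-cli binary from the build step above; note that image input additionally requires a Qwen2-VL mmproj projector file and a vision-enabled example binary, which this snippet does not cover:

```bash
# Run a short text-only completion against the Q2_K quant:
# -m selects the model file, -p sets the prompt, -n caps generated tokens.
./build/bin/llama-cli \
  -m MY_LOCAL_DIR/Qwen2-VL-2B-Q2_K.gguf \
  -p "Describe what a vision-language model does." \
  -n 64
```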
Documentation
Prompt template
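Qwen2-family models conventionally use a ChatML-style chat template; the sketch below is an assumption based on that convention, with {system_prompt} and {prompt} as placeholders you substitute. The base (non-instruct) model also accepts plain free-form prompts:

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```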
Model file specification
| Property | Details |
| --- | --- |
| Filename | Qwen2-VL-2B-Q2_K.gguf, Qwen2-VL-2B-Q3_K_S.gguf, etc. |
| Quant type | Q2_K, Q3_K_S, Q3_K_M, etc. |
| File Size | Ranging from 0.676 GB to 1.647 GB |
| Description | Varying levels of quality loss and file-size trade-offs |
| Filename | Quant type | File Size | Description |
| --- | --- | --- | --- |
| Qwen2-VL-2B-Q2_K.gguf | Q2_K | 0.676 GB | smallest, significant quality loss - not recommended for most purposes |
| Qwen2-VL-2B-Q3_K_S.gguf | Q3_K_S | 0.761 GB | very small, high quality loss |
| Qwen2-VL-2B-Q3_K_M.gguf | Q3_K_M | 0.824 GB | very small, high quality loss |
| Qwen2-VL-2B-Q3_K_L.gguf | Q3_K_L | 0.880 GB | small, substantial quality loss |
| Qwen2-VL-2B-Q4_0.gguf | Q4_0 | 0.935 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Qwen2-VL-2B-Q4_K_S.gguf | Q4_K_S | 0.940 GB | small, greater quality loss |
| Qwen2-VL-2B-Q4_K_M.gguf | Q4_K_M | 0.986 GB | medium, balanced quality - recommended |
| Qwen2-VL-2B-Q5_0.gguf | Q5_0 | 1.099 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Qwen2-VL-2B-Q5_K_S.gguf | Q5_K_S | 1.099 GB | large, low quality loss - recommended |
| Qwen2-VL-2B-Q5_K_M.gguf | Q5_K_M | 1.125 GB | large, very low quality loss - recommended |
| Qwen2-VL-2B-Q6_K.gguf | Q6_K | 1.273 GB | very large, extremely low quality loss |
| Qwen2-VL-2B-Q8_0.gguf | Q8_0 | 1.647 GB | very large, extremely low quality loss - not recommended |
License
This project is licensed under the Apache-2.0 license.