🚀 BSC-LT/salamandra-2b-instruct - GGUF
This repository offers GGUF format model files for BSC-LT/salamandra-2b-instruct, providing a practical solution for various text generation tasks.
📄 License
The project is licensed under the Apache 2.0 license.
🌐 Language Support
This model supports multiple languages, including Bulgarian (bg), Catalan (ca), Code, Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Basque (eu), Finnish (fi), French (fr), Irish (ga), Galician (gl), Croatian (hr), Hungarian (hu), Italian (it), Lithuanian (lt), Latvian (lv), Maltese (mt), Dutch (nl), Norwegian Nynorsk (nn), Norwegian Bokmål (no), Occitan (oc), Polish (pl), Portuguese (pt), Romanian (ro), Russian (ru), Serbo-Croatian (sh), Slovak (sk), Slovenian (sl), Serbian (sr), Swedish (sv), and Ukrainian (uk).
📊 Datasets
For the list of training datasets, refer to the original BSC-LT/salamandra-2b-instruct model card.
⚙️ Base Model
The base model used is BSC-LT/salamandra-2b-instruct.
🏷️ Tags
The model is tagged with TensorBlock and GGUF.
🚀 Quick Start
This repo contains GGUF format model files for BSC-LT/salamandra-2b-instruct. The files were quantized using machines provided by TensorBlock, and they are compatible with llama.cpp as of commit b4658.
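As a quick sanity check, the sketch below runs one of the quantized files with llama.cpp's llama-cli binary. The quant file and paths are assumptions; substitute whichever file you downloaded and the location of your own llama.cpp build.

```bash
# Minimal sketch: one-shot generation with a local llama.cpp build
# (binary location and model path are assumptions about your setup)
./llama-cli \
  -m ./salamandra-2b-instruct-Q4_K_M.gguf \
  -p "Explain what GGUF files are in one sentence." \
  -n 128
```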
🌟 Our Projects
| Project | Description | Image | Link |
|---------|-------------|-------|------|
| Awesome MCP Servers | A comprehensive collection of Model Context Protocol (MCP) servers. |  | GitHub |
| TensorBlock Studio | A lightweight, open, and extensible multi-LLM interaction studio. |  | GitHub |
💻 Prompt Template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
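For illustration, here is the template filled in with a made-up system prompt and user message; the model's reply is generated after the final `<|im_start|>assistant` line:

```
<|im_start|>system
You are a helpful multilingual assistant.<|im_end|>
<|im_start|>user
What is the capital of Spain?<|im_end|>
<|im_start|>assistant
```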
📋 Model File Specification
| Filename | Quant type | File Size | Description |
|----------|------------|-----------|-------------|
| salamandra-2b-instruct-Q2_K.gguf | Q2_K | 1.087 GB | smallest, significant quality loss - not recommended for most purposes |
| salamandra-2b-instruct-Q3_K_S.gguf | Q3_K_S | 1.215 GB | very small, high quality loss |
| salamandra-2b-instruct-Q3_K_M.gguf | Q3_K_M | 1.277 GB | very small, high quality loss |
| salamandra-2b-instruct-Q3_K_L.gguf | Q3_K_L | 1.317 GB | small, substantial quality loss |
| salamandra-2b-instruct-Q4_0.gguf | Q4_0 | 1.410 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| salamandra-2b-instruct-Q4_K_S.gguf | Q4_K_S | 1.447 GB | small, greater quality loss |
| salamandra-2b-instruct-Q4_K_M.gguf | Q4_K_M | 1.506 GB | medium, balanced quality - recommended |
| salamandra-2b-instruct-Q5_0.gguf | Q5_0 | 1.626 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| salamandra-2b-instruct-Q5_K_S.gguf | Q5_K_S | 1.642 GB | large, low quality loss - recommended |
| salamandra-2b-instruct-Q5_K_M.gguf | Q5_K_M | 1.691 GB | large, very low quality loss - recommended |
| salamandra-2b-instruct-Q6_K.gguf | Q6_K | 1.920 GB | very large, extremely low quality loss |
| salamandra-2b-instruct-Q8_0.gguf | Q8_0 | 2.401 GB | very large, extremely low quality loss - not recommended |
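If in doubt, Q4_K_M is a reasonable default per the table above. Below is a minimal sketch of serving that quant with llama.cpp's llama-server; the binary location, model path, and port are assumptions about your local setup.

```bash
# Hypothetical local setup: serve the recommended quant over llama-server's HTTP API
./llama-server -m ./salamandra-2b-instruct-Q4_K_M.gguf --port 8080
```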
📥 Download Instructions
Command line
First, install the Hugging Face CLI:

```bash
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```bash
huggingface-cli download tensorblock/salamandra-2b-instruct-GGUF --include "salamandra-2b-instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., *Q4_K*gguf), you can try:

```bash
huggingface-cli download tensorblock/salamandra-2b-instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
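To confirm the files landed where expected, a simple listing works (using the same MY_LOCAL_DIR placeholder as above):

```bash
# List downloaded GGUF files with human-readable sizes
ls -lh MY_LOCAL_DIR/*.gguf
```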