🚀 NikolayKozloff/gemma-3-4b-it-Q8_0-GGUF
This model is a GGUF-formatted conversion of google/gemma-3-4b-it. It offers an accessible way to run the model with llama.cpp.
🚀 Quick Start
This model was converted to GGUF format from google/gemma-3-4b-it using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
✨ Features
- Model Source: Converted from google/gemma-3-4b-it.
- Format: GGUF, which is compatible with llama.cpp.
- Access Requirement: Gemma is gated on Hugging Face; you must review and agree to Google's usage license before downloading (see the Important Note below).
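Gated downloads also require that you are authenticated locally. A typical setup, sketched below, uses the standard huggingface_hub CLI (the tool and flags are generic Hugging Face tooling, not part of this repo):

```bash
# Install the Hugging Face CLI (skip if already installed)
pip install -U "huggingface_hub[cli]"

# Log in with a Hugging Face access token so gated downloads work
huggingface-cli login
```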
📦 Installation
Install llama.cpp via brew (works on macOS and Linux):
brew install llama.cpp
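If you prefer to fetch the weights to disk yourself rather than letting llama.cpp pull them on demand, you can download the GGUF file with the Hugging Face CLI; this is a sketch, and the local path is just an example:

```bash
# Download the quantized model file into the current directory
huggingface-cli download NikolayKozloff/gemma-3-4b-it-Q8_0-GGUF \
  gemma-3-4b-it-q8_0.gguf --local-dir .
```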
💻 Usage Examples
Basic Usage
CLI
llama-cli --hf-repo NikolayKozloff/gemma-3-4b-it-Q8_0-GGUF --hf-file gemma-3-4b-it-q8_0.gguf -p "The meaning of life and the universe is"
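For interactive chat rather than a one-shot completion, llama-cli also has a conversation mode. The extra flags below (-cnv, -n, --temp) are standard llama.cpp options, though exact flag names can vary between releases:

```bash
# Chat interactively with the model, limiting each reply to 256 tokens
llama-cli --hf-repo NikolayKozloff/gemma-3-4b-it-Q8_0-GGUF \
  --hf-file gemma-3-4b-it-q8_0.gguf -cnv -n 256 --temp 0.7
```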
Server
llama-server --hf-repo NikolayKozloff/gemma-3-4b-it-Q8_0-GGUF --hf-file gemma-3-4b-it-q8_0.gguf -c 2048
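Once the server is up (it listens on http://localhost:8080 by default), you can query its built-in HTTP API. A minimal sketch using the /completion endpoint:

```bash
# Request a 64-token completion from the running llama-server
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning of life and the universe is", "n_predict": 64}'
```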
Advanced Usage
You can also use this checkpoint directly by following the usage steps in the llama.cpp repository.
Step 1: Clone llama.cpp from GitHub
git clone https://github.com/ggerganov/llama.cpp
Step 2: Move into the llama.cpp folder and build it
Build with the LLAMA_CURL=1 flag, along with any hardware-specific flags (for example, LLAMA_CUDA=1 for NVIDIA GPUs on Linux).
cd llama.cpp && LLAMA_CURL=1 make
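Note that recent llama.cpp releases have moved from Make to CMake, and some flag names changed along the way (for instance, the CUDA switch became GGML_CUDA). Treat the following as a sketch of the equivalent CMake build:

```bash
# Configure with libcurl support (add -DGGML_CUDA=ON for NVIDIA GPUs)
cmake -B build -DLLAMA_CURL=ON
# Compile in release mode
cmake --build build --config Release
```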
Step 3: Run inference using the built binaries
./llama-cli --hf-repo NikolayKozloff/gemma-3-4b-it-Q8_0-GGUF --hf-file gemma-3-4b-it-q8_0.gguf -p "The meaning of life and the universe is"
or
./llama-server --hf-repo NikolayKozloff/gemma-3-4b-it-Q8_0-GGUF --hf-file gemma-3-4b-it-q8_0.gguf -c 2048
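llama-server also exposes an OpenAI-compatible API, so existing chat-completions clients can point at it. A minimal sketch against the default port:

```bash
# Send a chat request to the OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Give me a one-line summary of GGUF."}]}'
```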
📄 License
This model is distributed under the Gemma license.
⚠️ Important Note
To access Gemma on Hugging Face, you're required to review and agree to Google's usage license. To do so, make sure you're logged in to Hugging Face and acknowledge the license on the google/gemma-3-4b-it model page. Requests are processed immediately.
💡 Usage Tip
You can adjust the hardware-specific flags to match your system configuration when building llama.cpp, such as enabling CUDA support for NVIDIA GPUs.
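For example, combining the flags already mentioned above (flag names differ across llama.cpp releases, so treat this as a sketch):

```bash
# Build with curl support and CUDA acceleration, using all CPU cores
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j$(nproc)
```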