[MaziyarPanahi/rank_zephyr_7b_v1_full-GGUF](https://huggingface.co/MaziyarPanahi/rank_zephyr_7b_v1_full-GGUF)
This repository contains GGUF format model files for castorini/rank_zephyr_7b_v1_full, offering various quantization options for efficient use.
🚀 Quick Start
This model is in GGUF format, a new standard introduced by the llama.cpp team. To start using it, you need to download the appropriate GGUF file and use a compatible client or library.
✨ Features
- Multiple Quantization Options: Available in 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit quantization.
- Broad Compatibility: Supported by a wide range of clients and libraries, including llama.cpp, text-generation-webui, KoboldCpp, and more.
- Efficient Inference: Optimized for both CPU and GPU inference, with options to offload layers to the GPU.
📦 Installation
Prerequisites
- Install the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- Optionally, install `hf_transfer` to accelerate downloads on fast connections:
```shell
pip3 install hf_transfer
```
Downloading the Model
- Using `huggingface-cli` to download a single file:
```shell
huggingface-cli download MaziyarPanahi/rank_zephyr_7b_v1_full-GGUF rank_zephyr_7b_v1_full-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
- Downloading multiple files at once:
```shell
huggingface-cli download MaziyarPanahi/rank_zephyr_7b_v1_full-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
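Downloads can also be scripted. A minimal sketch using the `huggingface_hub` library from the prerequisites above (it enables `hf_transfer`-accelerated downloads only if that optional package is installed):

```python
import os

# Optional: enable accelerated downloads; has effect only if hf_transfer is installed.
# Must be set before importing huggingface_hub.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

# Fetch one quantized file into the current directory.
model_path = hf_hub_download(
    repo_id="MaziyarPanahi/rank_zephyr_7b_v1_full-GGUF",
    filename="rank_zephyr_7b_v1_full-GGUF.Q4_K_M.gguf",
    local_dir=".",
)
print(model_path)
```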
💻 Usage Examples
Basic Usage - llama.cpp
Make sure you are using llama.cpp from commit d0cee0d or later.
```shell
./main -ngl 35 -m rank_zephyr_7b_v1_full-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Advanced Usage - Python with llama-cpp-python
Install the Package
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration (macOS only)
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, set CMAKE_ARGS in PowerShell before installing; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
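As an optional sanity check after installation, importing the package and printing its version confirms the build succeeded:

```python
# Optional sanity check: confirm llama-cpp-python imports and report its version.
import llama_cpp
print(llama_cpp.__version__)
```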
Example Code
```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./rank_zephyr_7b_v1_full-GGUF.Q4_K_M.gguf",  # Download the model file first
    n_ctx=32768,      # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
    "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant",  # Prompt
    max_tokens=512,   # Generate up to 512 tokens
    stop=["</s>"],    # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True         # Whether to echo the prompt
)

# Chat Completion API
llm = Llama(model_path="./rank_zephyr_7b_v1_full-GGUF.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a story writing assistant."},
        {"role": "user", "content": "Write a story about llamas."}
    ]
)
```
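For interactive use, the chat completion call can also stream tokens as they are generated. A minimal sketch, reusing the `llm` object from above and llama-cpp-python's OpenAI-style streaming deltas:

```python
# Stream a chat completion and print tokens as they arrive.
stream = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a story writing assistant."},
        {"role": "user", "content": "Write a story about llamas."}
    ],
    stream=True
)
for chunk in stream:
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)
```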
📚 Documentation
About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is a list of clients and libraries known to support GGUF:
- llama.cpp. The source project for GGUF. Offers a CLI and a server option.
- text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
- KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
- GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
- LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
- LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
- Faraday.dev, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
- llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
- candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
- ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
Explanation of Quantisation Methods
The new quantization methods available are:
- GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
- GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
- GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
- GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
- GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
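Since the quant type (Q2_K, Q4_K_M, and so on) is encoded in each file name, the quantizations actually available in this repository can be checked programmatically. A small sketch using `huggingface_hub` (the exact file names depend on what the repo currently contains):

```python
from huggingface_hub import list_repo_files

# List the GGUF files in the repo; the quant type is part of each file name.
files = list_repo_files("MaziyarPanahi/rank_zephyr_7b_v1_full-GGUF")
for name in sorted(files):
    if name.endswith(".gguf"):
        print(name)
```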
How to Run in text-generation-webui
Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 - Model Tab.md.
How to Use with LangChain
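The GGUF file can be used from LangChain through its llama-cpp-python integration. A rough sketch, assuming the `langchain-community` package is installed alongside `llama-cpp-python` (the package name and import path follow current LangChain conventions and are not part of the original card):

```python
from langchain_community.llms import LlamaCpp

# Point LangChain's LlamaCpp wrapper at the downloaded GGUF file.
llm = LlamaCpp(
    model_path="./rank_zephyr_7b_v1_full-GGUF.Q4_K_M.gguf",
    n_ctx=32768,       # Max sequence length, as in the llama-cpp-python example above
    n_gpu_layers=35,   # Layers to offload to GPU; set to 0 for CPU-only
    temperature=0.7,
    max_tokens=512,
)

print(llm.invoke("Explain what a GGUF file is in one sentence."))
```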
🔧 Technical Details
The model is based on castorini/rank_zephyr_7b_v1_full and quantized with several methods to reduce memory usage and speed up inference. Each quantization level trades compression ratio against model accuracy.
📄 License
This model is licensed under the Apache-2.0 license and the MIT license.

