🚀 [MaziyarPanahi/OPEN-SOLAR-KO-10.7B-GGUF](https://huggingface.co/MaziyarPanahi/OPEN-SOLAR-KO-10.7B-GGUF)
This repository contains GGUF format model files for beomi/OPEN-SOLAR-KO-10.7B, offering various quantization options for efficient text generation.
🚀 Quick Start
Prerequisites
- Ensure you have the necessary libraries installed for working with GGUF models. For example, you can use huggingface-hub to download model files.
pip3 install huggingface-hub
Downloading the Model
You can download a specific model file using the following command:
huggingface-cli download MaziyarPanahi/OPEN-SOLAR-KO-10.7B-GGUF OPEN-SOLAR-KO-10.7B-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
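If you prefer to stay in Python, the same file can be fetched with the huggingface_hub library; the snippet below is a minimal sketch mirroring the CLI command above (the local directory "." is an arbitrary choice).
from huggingface_hub import hf_hub_download

# Fetch a single quantized file into the current directory (same repo and filename as the CLI call above)
hf_hub_download(
    repo_id="MaziyarPanahi/OPEN-SOLAR-KO-10.7B-GGUF",
    filename="OPEN-SOLAR-KO-10.7B-GGUF.Q4_K_M.gguf",
    local_dir=".",
)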
Running the Model
Here is an example command to run the model using llama.cpp:
./main -ngl 35 -m OPEN-SOLAR-KO-10.7B-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
✨ Features
- Quantization Options: Supports multiple quantization methods (2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit) for efficient storage and inference.
- Compatibility: Compatible with various clients and libraries such as llama.cpp, text-generation-webui, KoboldCpp, etc.
- Extended Sequence Support: Can handle extended sequence lengths, with the necessary RoPE scaling parameters read from the GGUF file automatically.
📦 Installation
Installing Required Libraries
- huggingface-hub: For downloading model files.
pip3 install huggingface-hub
- llama-cpp-python: For using the model in Python code. You can choose different installation options based on your system's GPU support.
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
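A quick way to confirm that the build you just installed imports cleanly is to print its version from Python; this is a trivial sanity check, nothing model-specific.
import llama_cpp

# If this prints a version string, the wheel was built and installed correctly.
print(llama_cpp.__version__)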
💻 Usage Examples
Basic Usage in llama.cpp
./main -ngl 35 -m OPEN-SOLAR-KO-10.7B-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
Advanced Usage in Python with llama-cpp-python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./OPEN-SOLAR-KO-10.7B-GGUF.Q4_K_M.gguf",  # Download the model file first
    n_ctx=32768,      # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
    "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant",  # Prompt
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True        # Whether to echo the prompt
)
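# The completion call above returns an OpenAI-style response dict; a small illustrative read-out
# (these field names are llama-cpp-python's standard completion schema, not anything model-specific):
print(output["choices"][0]["text"])  # includes the echoed prompt because echo=True was set above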
# Chat Completion API
llm = Llama(model_path="./OPEN-SOLAR-KO-10.7B-GGUF.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
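If you want tokens as they are generated rather than one blocking response, llama-cpp-python also accepts stream=True on the chat completion call. The sketch below assumes the same model path and chat_format as the example above.
from llama_cpp import Llama

llm = Llama(model_path="./OPEN-SOLAR-KO-10.7B-GGUF.Q4_K_M.gguf", chat_format="llama-2")

# Streamed chat completion: each chunk carries an OpenAI-style "delta" with a piece of the reply
for chunk in llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a story writing assistant."},
        {"role": "user", "content": "Write a story about llamas."}
    ],
    stream=True
):
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)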
📚 Documentation
About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
- llama.cpp. The source project for GGUF. Offers a CLI and a server option.
- text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
- KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
- GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
- LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
- LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
- Faraday.dev, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
- llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
- candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
- ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
Explanation of quantisation methods
The new methods available are:
- GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
- GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
- GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
- GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
- GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
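As a rough sanity check of the bits-per-weight figures above, the sketch below reproduces the Q4_K, Q5_K and Q6_K numbers from the block layout just described, assuming a 16-bit floating-point scale (and, for the "type-1" formats, a 16-bit min) per super-block; the exact on-disk layout lives in llama.cpp, so this is purely arithmetic.
# Effective bpw = (weight bits + per-block scale/min bits + super-block header bits) / number of weights
def bpw(blocks, weights_per_block, weight_bits, block_scale_bits, super_header_bits):
    weights = blocks * weights_per_block
    total_bits = weights * weight_bits + blocks * block_scale_bits + super_header_bits
    return total_bits / weights

# Q4_K: 8 blocks x 32 weights, 4-bit weights, 6-bit scale + 6-bit min per block, assumed fp16 scale + fp16 min per super-block
print(bpw(8, 32, 4, 6 + 6, 16 + 16))   # -> 4.5
# Q5_K: same super-block structure as Q4_K, with 5-bit weights
print(bpw(8, 32, 5, 6 + 6, 16 + 16))   # -> 5.5
# Q6_K: 16 blocks x 16 weights, 6-bit weights, 8-bit scale per block, assumed fp16 scale per super-block
print(bpw(16, 16, 6, 8, 16))           # -> 6.5625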
How to download GGUF files
Note for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
In text-generation-webui
Under Download Model, you can enter the model repo: MaziyarPanahi/OPEN-SOLAR-KO-10.7B-GGUF and below it, a specific filename to download, such as: OPEN-SOLAR-KO-10.7B-GGUF.Q4_K_M.gguf. Then click Download.
On the command line, including multiple files at once
You can download any individual model file to the current directory, at high speed, with a command like this:
huggingface-cli download MaziyarPanahi/OPEN-SOLAR-KO-10.7B-GGUF OPEN-SOLAR-KO-10.7B-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
You can also download multiple files at once with a pattern:
huggingface-cli download MaziyarPanahi/OPEN-SOLAR-KO-10.7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
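The same pattern-based download can be done from Python with huggingface_hub's snapshot_download and its allow_patterns filter; this is a minimal sketch in which the glob pattern mirrors the --include option above.
from huggingface_hub import snapshot_download

# Download only the Q4_K quantization files from the repo into the current directory
snapshot_download(
    repo_id="MaziyarPanahi/OPEN-SOLAR-KO-10.7B-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir=".",
)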
For more documentation on downloading with huggingface-cli, please see: HF -> Hub Python Library -> Download files -> Download from the CLI.
To accelerate downloads on fast connections (1Gbit/s or higher), install hf_transfer:
pip3 install hf_transfer
And set the environment variable HF_HUB_ENABLE_HF_TRANSFER to 1:
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/OPEN-SOLAR-KO-10.7B-GGUF OPEN-SOLAR-KO-10.7B-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
Windows Command Line users: You can set the environment variable by running set HF_HUB_ENABLE_HF_TRANSFER=1 before the download command.
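From Python, the equivalent is to set the variable before huggingface_hub is imported; a small sketch, assuming hf_transfer is already installed:
import os

# Must be set before huggingface_hub is imported, since the flag is read at import time
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="MaziyarPanahi/OPEN-SOLAR-KO-10.7B-GGUF",
    filename="OPEN-SOLAR-KO-10.7B-GGUF.Q4_K_M.gguf",
    local_dir=".",
)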
How to run in text-generation-webui
Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model Tab.md.
How to use with LangChain
Guides are available for using llama-cpp-python and ctransformers with LangChain.
🔧 Technical Details
Model Information
| Property | Details |
|---|---|
| Model Type | GGUF format for beomi/OPEN-SOLAR-KO-10.7B |
| Training Data | Not specified in the original README |
Command Explanation
- -ngl: Specifies the number of layers to offload to the GPU. Remove this option if you don't have GPU acceleration.
- -c: Sets the desired sequence length. Longer sequence lengths require more resources.
- -p: Defines the prompt for the model. For chat-style conversations, you can use -i -ins instead.
📄 License
This project is licensed under the Apache-2.0 license.