
Llama 3 8B Instruct 64k GGUF

Developed by MaziyarPanahi
A GGUF-quantized release of Llama-3-8B-Instruct-64k, available in multiple quantization bit widths and suited to text generation tasks.
Downloads 201.57k
Release Date: 4/25/2024

Model Overview

This model is the GGUF-format release of Llama-3-8B-Instruct-64k. It is intended primarily for text generation and is published in quantization options ranging from 2-bit to 8-bit.
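As a rough sketch of how you might fetch one of the quantized files, the snippet below uses the huggingface_hub client. The repository ID and filename are assumptions for illustration; check the repository's file list for the exact names of the 2-bit through 8-bit variants.

```python
# Sketch: download one quantization variant of the GGUF model.
# Repo ID and filename below are assumed, not confirmed by this page.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF",  # assumed repo ID
    filename="Llama-3-8B-Instruct-64k.Q4_K_M.gguf",        # assumed 4-bit variant
)
print(model_path)  # local path to the downloaded GGUF file
```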

Model Features

Multi-bit quantization support
Offers 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit quantization options to fit different hardware and memory budgets.
GGUF format
Uses the GGUF format, which supersedes the older GGML format and is supported by a wide range of clients and libraries, including llama.cpp.
64k context length
Supports context lengths of up to 64k tokens, well suited to long-text generation tasks (see the loading sketch after this list).
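A minimal loading sketch, assuming the llama-cpp-python bindings and a locally downloaded quant file: the filename and parameter values are illustrative, and n_ctx / n_gpu_layers should be tuned to your hardware and memory budget.

```python
# Sketch: load a GGUF quant with llama-cpp-python and request the long context window.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-8B-Instruct-64k.Q4_K_M.gguf",  # assumed local filename
    n_ctx=65536,      # request the full 64k-token context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)

out = llm(
    "Summarize the benefits of GGUF quantization in two sentences.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```

Lower-bit quants reduce memory use at some cost in output quality, so the choice of file is mainly a trade-off between available RAM/VRAM and fidelity.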

Model Capabilities

Text generation
Instruction following
Long text processing

Use Cases

Text generation
Dialogue systems
Can be used to build dialogue systems that generate natural, fluent responses (a minimal chat sketch follows this list).
Story creation
Suited to long-form story creation, where the long context window helps keep generated text coherent.
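A minimal dialogue-turn sketch, again assuming llama-cpp-python and an already-downloaded quant file (the path is an assumption); create_chat_completion applies the model's chat template for instruction-style prompts.

```python
# Sketch: one chat turn with llama-cpp-python's chat-completion API.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-8B-Instruct-64k.Q4_K_M.gguf",  # assumed local filename
    n_ctx=8192,  # smaller window is enough for a short exchange
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise, helpful assistant."},
        {"role": "user", "content": "Outline a short bedtime story about a lighthouse keeper."},
    ],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```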