
LongAlpaca-13B-GGUF

Developed by MaziyarPanahi
LongAlpaca-13B-GGUF is the GGUF-quantized version of the Yukang/LongAlpaca-13B model. It offers 2- to 8-bit quantization options and is suited to local text generation tasks.
Downloads: 285
Release date: 2/26/2024

Model Overview

This model is the GGUF-quantized version of LongAlpaca-13B. It supports multiple quantization levels, making it suitable for running text generation tasks on local devices.

Model Features

Multi-bit Quantization Support
Supports multiple quantization levels from 2-bit to 8-bit, allowing users to choose the version best suited to their hardware.
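To make the trade-off concrete, here is a minimal sketch of how one might ballpark the file size of each quantization level for a 13B-parameter model. The estimate simply multiplies parameter count by bits per weight; it ignores metadata and the mixed bit-widths that real GGUF quant schemes (e.g. Q4_K_M) use, so it is a rough guide only, not a statement about the actual files in this repository.

```python
# Rough GGUF file-size estimate: parameters * bits-per-weight / 8 bytes.
# Real quant schemes mix bit-widths and add metadata, so this is a
# ballpark only.
def approx_gguf_size_gib(n_params: float, bits_per_weight: int) -> float:
    bytes_total = n_params * bits_per_weight / 8
    return bytes_total / 1024**3  # convert bytes to GiB

# A 13B-parameter model at a few quantization levels:
for bits in (2, 4, 8):
    print(f"{bits}-bit: ~{approx_gguf_size_gib(13e9, bits):.1f} GiB")
```

Estimates like this help decide whether a given quant will fit in available RAM or VRAM before downloading it.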
Long Context Support
Supports context lengths of up to 32768 tokens, making it well suited to long-form text generation tasks.
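When feeding long documents to a model with a 32768-token window, input that exceeds the window must be split. The sketch below shows one simple chunking approach using a rough 4-characters-per-token heuristic; that ratio is an assumption (actual token counts depend on the tokenizer), so a generous safety margin is applied.

```python
# Split text into chunks that should fit a 32768-token context window.
# chars_per_token=4 is a rough heuristic, not a tokenizer-accurate count,
# so the margin below is deliberately conservative.
def chunk_for_context(text: str, n_ctx: int = 32768,
                      chars_per_token: int = 4, margin: float = 0.5) -> list[str]:
    max_chars = int(n_ctx * chars_per_token * margin)
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

# Usage: a ~250k-character document splits into a handful of chunks.
chunks = chunk_for_context("word " * 50_000)
print(len(chunks), max(len(c) for c in chunks))
```

For precise budgeting, count tokens with the model's actual tokenizer instead of a character heuristic.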
Local Operation Optimization
The GGUF format is optimized for local inference and is supported by many clients and libraries, including llama.cpp and text-generation-webui.
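As a small illustration of the format, the sketch below reads a GGUF file's header to sanity-check a download before loading it: GGUF files begin with the 4-byte magic b"GGUF" followed by a little-endian uint32 format version. The filename in the usage comment is hypothetical.

```python
import struct

# Minimal GGUF header check: the file must start with the magic
# b"GGUF", followed by a little-endian uint32 format version.
def read_gguf_version(path: str) -> int:
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        (version,) = struct.unpack("<I", f.read(4))
        return version

# Usage (filename is hypothetical):
# version = read_gguf_version("LongAlpaca-13B.Q4_K_M.gguf")
```

A check like this catches truncated or mislabeled downloads before handing the file to a runtime such as llama.cpp.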

Model Capabilities

Text Generation
Long Text Processing

Use Cases

Content Creation
Long Article Generation
Utilizes the model's long-context capability to generate coherent long articles or reports.
Dialogue Systems
Long Dialogue Maintenance
Maintains contextual consistency in long dialogues within dialogue systems.