
Llama 3.2 3B Instruct Abliterated GGUF

Developed by MaziyarPanahi
GGUF-format quantized version of Llama-3.2-3B-Instruct-abliterated, supporting multiple bit quantization options, suitable for text generation tasks.
Downloads: 181
Release date: 10/20/2024

Model Overview

This model is a GGUF-format quantized version of huihui-ai/Llama-3.2-3B-Instruct-abliterated, offering quantization options from 2-bit to 8-bit and suited to text generation tasks.
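As a concrete starting point, the sketch below fetches a single quantized file from the Hugging Face Hub with huggingface_hub. The repository id and the Q4_K_M filename are assumptions based on this card's naming, not confirmed by it; check the model page for the exact file names.

```python
# Sketch: download one quantized GGUF file from the Hugging Face Hub.
# Repo id and filename are assumed from the card's naming conventions;
# verify them on the model page before running.
from huggingface_hub import hf_hub_download

REPO_ID = "MaziyarPanahi/Llama-3.2-3B-Instruct-abliterated-GGUF"  # assumed repo id
FILENAME = "Llama-3.2-3B-Instruct-abliterated.Q4_K_M.gguf"        # assumed 4-bit file

local_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
print(f"Model downloaded to: {local_path}")
```

Smaller quantizations (2-bit, 3-bit) trade output quality for a smaller download and memory footprint; larger ones (6-bit, 8-bit) do the opposite.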

Model Features

Multiple Quantization Options
Supports 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit quantization options, allowing a trade-off between file size and output quality to match the available hardware.
GGUF Format Support
Uses the GGUF format, which replaced the no-longer-supported GGML format and is compatible with a wide range of clients and libraries.
Broad Client Support
Works with clients and libraries such as llama.cpp, LM Studio, and text-generation-webui, making it straightforward to integrate and run locally.
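For illustration, a minimal way to load one of the quantized files with llama-cpp-python (Python bindings for llama.cpp, one of the supported libraries) and run a text completion could look like the sketch below; the filename, context size, and sampling settings are placeholder assumptions.

```python
# Sketch: load a quantized GGUF file with llama-cpp-python and generate text.
# The filename, context size, and sampling settings are placeholder choices.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.2-3B-Instruct-abliterated.Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

output = llm(
    "Write one sentence explaining what GGUF quantization is.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```

The same GGUF file also loads directly in LM Studio or text-generation-webui without any conversion step.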

Model Capabilities

Text Generation
Instruction Following

Use Cases

Text Generation
Dialogue Generation
Can be used to generate conversational responses for scenarios such as chatbots; a chat-completion sketch follows this list.
Content Creation
Can be used for content creation tasks such as generating articles and stories.
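To illustrate the dialogue use case, the sketch below uses llama-cpp-python's chat-completion API, which applies the model's chat template to a list of messages. The file path, system prompt, and user message are illustrative assumptions.

```python
# Sketch: dialogue generation with llama-cpp-python's chat-completion API.
# File path and prompts are illustrative; swap in the quantization you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="Llama-3.2-3B-Instruct-abliterated.Q4_K_M.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise, helpful assistant."},
        {"role": "user", "content": "Suggest an opening line for a chatbot that books flights."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

For multi-turn conversations, append each assistant reply and the next user message to the `messages` list before calling `create_chat_completion` again.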