
mlabonne Gemma 3 12B It Abliterated GGUF

Developed by bartowski
A quantized version of the mlabonne/gemma-3-12b-it-abliterated model, produced with llama.cpp's imatrix quantization and suited to text generation tasks.
Downloads: 7,951
Release Time: 3/18/2025

Model Overview

This is a quantized build of the 12B-parameter abliterated Gemma 3 model. It supports text generation tasks and is well suited to local inference environments.

Model Features

Efficient quantization
Utilizes llama.cpp's imatrix quantization technology, offering multiple quantization options to balance model size and performance.
Local inference support
Runs locally in tools such as LM Studio or llama.cpp, making it suitable for offline use.
Multiple quantization options
Provides quantization levels ranging from BF16 down to Q2_K, catering to different hardware and quality requirements.
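The trade-off between the listed quantization levels is mostly about on-disk size: each level stores weights at a different effective bit width. A rough sketch of how file size scales with bits per weight for a 12B-parameter model follows; the bits-per-weight figures are approximate community estimates, not exact values for this specific GGUF release.

```python
# Approximate effective bits-per-weight for common llama.cpp quantization
# levels. These are rough estimates, not exact figures for this model.
BITS_PER_WEIGHT = {
    "BF16": 16.0,
    "Q8_0": 8.5,
    "Q6_K": 6.56,
    "Q4_K_M": 4.8,
    "Q2_K": 2.6,
}

PARAMS = 12e9  # nominal parameter count of the 12B model


def approx_size_gb(quant: str, params: float = PARAMS) -> float:
    """Estimate on-disk size in GB for a given quantization level."""
    bits = BITS_PER_WEIGHT[quant]
    return params * bits / 8 / 1e9


for q in BITS_PER_WEIGHT:
    print(f"{q:7s} ~ {approx_size_gb(q):.1f} GB")
```

This is why Q4_K_M is a common default: it cuts the download to roughly a third of BF16 while keeping output quality close to the full-precision model, whereas Q2_K trades further quality for the smallest footprint.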

Model Capabilities

Text generation
Dialogue systems
Instruction following

Use Cases

Dialogue systems
Intelligent assistant
Build locally run conversational assistants
Content generation
Text creation
Used for generating articles, stories, and other creative content
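For the assistant and content-generation use cases above, Gemma-family instruction-tuned models expect prompts wrapped in chat-turn markers. A minimal sketch follows: the prompt builder reflects Gemma's published chat template, while the commented inference call assumes the llama-cpp-python bindings and a hypothetical local filename for one of the GGUF quantizations.

```python
def build_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's chat-turn markers."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


# Hypothetical local inference via llama-cpp-python (model path is an
# assumption; substitute the quantization file you actually downloaded):
# from llama_cpp import Llama
# llm = Llama(model_path="gemma-3-12b-it-abliterated-Q4_K_M.gguf")
# out = llm(build_gemma_prompt("Write a short story."), max_tokens=256)
# print(out["choices"][0]["text"])
```

Tools like LM Studio apply this template automatically; the explicit builder is only needed when calling the model through lower-level APIs such as llama.cpp directly.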