
Gemma 3 27B IT Abliterated MLX 3-bit

Developed by KYUNGYONG
This is a 3-bit quantized version converted from the mlabonne/gemma-3-27b-it-abliterated model, optimized for the MLX framework.
Downloads: 129
Released: 3/21/2025

Model Overview

This is a 27B-parameter quantized large language model for text generation, optimized to run efficiently under the MLX framework on Apple silicon.
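A minimal usage sketch with the `mlx-lm` package (`pip install mlx-lm`; requires Apple silicon). The Hugging Face repo id below is an assumption — substitute the model's actual Hub path:

```python
def main():
    # Requires Apple silicon and the mlx-lm package.
    from mlx_lm import load, generate

    # Hypothetical repo id -- replace with the model's actual Hub path.
    model, tokenizer = load("KYUNGYONG/gemma-3-27b-it-abliterated-mlx-3bit")

    # Gemma is an instruction-tuned chat model, so format the prompt
    # with its chat template before generating.
    messages = [{"role": "user", "content": "Explain 3-bit quantization briefly."}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

    text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
    print(text)


if __name__ == "__main__":
    main()
```

`load` downloads the quantized weights on first use, so expect a multi-gigabyte initial download.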

Model Features

3-bit quantization
The weights are quantized to 3 bits, significantly reducing memory usage and compute requirements.
MLX optimization
Optimized specifically for the MLX framework for efficient inference on Apple silicon.
Large parameter scale
Its 27 billion parameters give it strong language understanding and generation capabilities.
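The memory savings from 3-bit quantization can be estimated with a quick back-of-the-envelope calculation (weights only; this ignores activation/KV-cache memory and the small per-group scale overhead that real quantization schemes add):

```python
PARAMS = 27e9  # parameter count of the model


def weight_gib(bits_per_param: float) -> float:
    """Approximate weight storage in GiB at a given precision."""
    return PARAMS * bits_per_param / 8 / 1024**3


print(f"fp16 : {weight_gib(16):.1f} GiB")  # ~50.3 GiB
print(f"3-bit: {weight_gib(3):.1f} GiB")   # ~9.4 GiB
```

So the quantized weights fit comfortably on consumer Apple-silicon machines where the fp16 checkpoint would not; in practice the on-disk size is slightly larger than the 3-bit figure because quantized formats also store per-group scales.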

Model Capabilities

Text generation
Dialogue system
Instruction following

Use Cases

Dialogue systems
Intelligent assistant: build intelligent conversational assistants.
Content creation
Text generation: automatically generate various types of text content.