
Gemma 3 1b It Abliterated GGUF

Developed by matrixportal
A GGUF format model converted from mlabonne/gemma-3-1b-it-abliterated, suitable for local inference tasks
Downloads: 333
Release date: 3/31/2025

Model Overview

This is a quantized Gemma 3 1B instruction-tuned model, converted to GGUF format for efficient operation on local devices

Model Features

Efficient Quantization
Provides multiple quantization options, from Q2_K to F16, to meet different hardware and performance requirements
Local Running Support
The GGUF format is optimized for local inference with runtimes such as llama.cpp, and runs on a wide range of devices, from CPUs to consumer GPUs
Instruction Tuning
Instruction-tuned, making the model well suited to dialogue and task-completion scenarios
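A typical local workflow is to download one quantization file and run it with llama.cpp. The sketch below is illustrative only: the repository id and GGUF filename are assumptions, so check the actual file list on the model page before running.

```shell
# Download a single quantization file from the Hugging Face Hub.
# Repo id and filename are assumed here; verify them on the model page.
huggingface-cli download matrixportal/gemma-3-1b-it-abliterated-GGUF \
  gemma-3-1b-it-abliterated.Q4_K_M.gguf --local-dir .

# Start an interactive chat session with llama.cpp's CLI.
# -m selects the model file, -cnv enables conversation mode,
# -p supplies an initial system-style prompt.
llama-cli -m gemma-3-1b-it-abliterated.Q4_K_M.gguf -cnv \
  -p "You are a helpful assistant."
```

Smaller quantizations such as Q2_K trade answer quality for lower memory use; Q4_K_M is a common middle ground for 1B-class models.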

Model Capabilities

Text Generation
Dialogue Interaction
Task Completion
Code Generation
Content Creation

Use Cases

Personal Assistant
Local AI Assistant
Run a private AI assistant on a personal computer or mobile device
Get AI assistance without an internet connection
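When driving an instruction-tuned Gemma model directly (rather than through a chat-aware runtime), prompts are wrapped in Gemma's turn-based chat template. A minimal sketch of building such a prompt is below; the exact template should be confirmed against the model's own tokenizer configuration.

```python
def build_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's turn-based chat template.

    Gemma marks each turn with <start_of_turn>/<end_of_turn> tokens;
    the trailing, unclosed model turn cues the model to generate a reply.
    """
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("Summarize GGUF in one sentence.")
print(prompt)
```

A chat-mode runtime (e.g. llama.cpp conversation mode) applies this template automatically, so manual formatting is only needed for raw completion APIs.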
Development Tool
Code Assistance
Help developers write and debug code
Improve development efficiency