
Josiefied Qwen3 1.7B Abliterated V1 4bit

Developed by mlx-community
A 4-bit quantized model based on Qwen3-1.7B, a lightweight large language model, optimized for the MLX framework
Downloads: 135
Release date: 4/29/2025

Model Overview

This model is a 4-bit quantized version of Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1, converted to and optimized for the MLX framework, and supports text generation tasks.
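For orientation, here is a minimal sketch of loading the model and generating text with the mlx-lm package. The repository ID mlx-community/Josiefied-Qwen3-1.7B-abliterated-v1-4bit is assumed from the model name above; adjust it if the actual Hub path differs.

```python
# Minimal generation sketch using mlx-lm (pip install mlx-lm).
# The repository ID is assumed from the model name on this page.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Josiefied-Qwen3-1.7B-abliterated-v1-4bit")

prompt = "Explain what 4-bit quantization does in one paragraph."

# If the tokenizer ships a chat template, wrap the prompt as a chat message.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
print(response)
```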

Model Features

4-bit quantization
The model weights are quantized to 4 bits, significantly reducing memory usage and compute requirements (a conversion sketch follows this list).
MLX optimization
Converted to and optimized for the MLX framework, enabling efficient inference on Apple Silicon devices.
Chat optimization
Ships with a chat template, making it suitable for conversational applications.
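For context on the quantization and MLX-conversion features, the sketch below shows how a 4-bit MLX conversion of the upstream model is typically produced with mlx-lm's convert utility. The exact options used for this release are not stated on this page, so the output path and quantization settings here are assumptions.

```python
# Sketch: producing a 4-bit MLX conversion of the upstream model with mlx-lm.
# The output directory and quantization settings (4 bits by default) are
# assumptions, not the exact options used for this release.
from mlx_lm import convert

convert(
    hf_path="Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1",  # upstream model
    mlx_path="Josiefied-Qwen3-1.7B-abliterated-v1-4bit",              # local output dir
    quantize=True,  # enable quantization (4-bit default)
)
```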

Model Capabilities

Text generation
Dialogue systems
Content creation

Use Cases

Conversational applications
Smart chatbot
Build local chatbot applications for Apple devices (a minimal chat-loop sketch follows below).
Delivers a smooth conversational experience entirely on the local device.
Content generation
Creative writing assistance
Helps users generate creative text content.
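As a sketch of the local-chatbot use case, the loop below keeps a running message history and formats it with the model's chat template on each turn. It assumes mlx-lm is installed and that the repository ID matches this model on the Hugging Face Hub.

```python
# Sketch of a minimal local chatbot loop (assumes mlx-lm is installed and the
# repository ID below matches this model on the Hugging Face Hub).
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Josiefied-Qwen3-1.7B-abliterated-v1-4bit")

messages = []  # running conversation history
while True:
    user_input = input("You: ")
    if user_input.strip().lower() in {"exit", "quit"}:
        break
    messages.append({"role": "user", "content": user_input})
    # Format the full history with the model's chat template.
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
    reply = generate(model, tokenizer, prompt=prompt)
    print("Assistant:", reply)
    messages.append({"role": "assistant", "content": reply})
```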