Josiefied Qwen3 4B Abliterated V1 4bit
This is a 4-bit quantized version of the Josiefied-Qwen3-4B-abliterated-v1 model (a Qwen3-4B derivative) converted to MLX format, suitable for chat and text generation tasks.
Downloads: 175
Release Time: 5/1/2025
Model Overview
This model was converted from Goekdeniz-Guelmez/Josiefied-Qwen3-4B-abliterated-v1 to MLX format and is intended primarily for chat and text generation tasks.
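One way such a checkpoint can be produced is with the mlx-lm conversion utility, sketched below; the local output directory name is an assumption, not part of this release.

```python
# Sketch: convert the source model to a 4-bit MLX checkpoint with mlx-lm
# (pip install mlx-lm). The output directory name is an arbitrary choice.
from mlx_lm import convert

convert(
    "Goekdeniz-Guelmez/Josiefied-Qwen3-4B-abliterated-v1",  # source Hugging Face repo
    mlx_path="Josiefied-Qwen3-4B-abliterated-v1-4bit",      # local output directory (assumed name)
    quantize=True,  # quantize weights during conversion
    q_bits=4,       # 4-bit quantization
)
```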
Model Features
4-bit quantization
The weights are quantized to 4 bits, which reduces memory and compute requirements.
MLX compatibility
The model has been converted to MLX format and runs efficiently under the MLX framework on Apple silicon (see the usage sketch after this feature list).
Chat optimization
The model is optimized for chat scenarios, supporting conversational interactions.
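A minimal usage sketch with the mlx-lm Python API follows; the repository id is an assumed placeholder and should be replaced with the actual path of this model (or a local directory of converted weights).

```python
# Minimal generation sketch with mlx-lm; the repo id is an assumed placeholder.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Josiefied-Qwen3-4B-abliterated-v1-4bit")

# Wrap the user message in the model's chat template before generating.
messages = [{"role": "user", "content": "Give me a one-paragraph summary of 4-bit quantization."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)
```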
Model Capabilities
Text generation
Conversational interaction
Use Cases
Chat applications
Intelligent dialogue
Used for building chatbots or virtual assistants
Can generate fluent, natural dialogue responses
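As an illustration of the chatbot use case above, the sketch below keeps a running message history so each reply is conditioned on the full conversation; the repo id is again an assumed placeholder.

```python
# Simple interactive chat loop sketch; the repo id is an assumed placeholder.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Josiefied-Qwen3-4B-abliterated-v1-4bit")
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    if user_input.strip().lower() in {"exit", "quit"}:
        break
    history.append({"role": "user", "content": user_input})
    prompt = tokenizer.apply_chat_template(
        history, tokenize=False, add_generation_prompt=True
    )
    reply = generate(model, tokenizer, prompt=prompt, max_tokens=512)
    print(f"Assistant: {reply}")
    # Keep the assistant turn so later replies see the full conversation.
    history.append({"role": "assistant", "content": reply})
```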
Content creation
Text generation
Used for automatically generating articles, stories, and other content
Can generate coherent text
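For longer free-form output such as stories or articles, streaming the generation is often more practical; this sketch assumes the same placeholder repo id and a recent mlx-lm release in which stream_generate yields chunks with a text field.

```python
# Streaming long-form generation sketch; the repo id is an assumed placeholder.
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/Josiefied-Qwen3-4B-abliterated-v1-4bit")

messages = [{"role": "user", "content": "Write a short story about a lighthouse keeper."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Print tokens as they are produced, which suits long article/story output.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=1024):
    print(chunk.text, end="", flush=True)
print()
```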