
SmolLM 135M 4-bit

Developed by mlx-community
This is a 4-bit quantized, 135M-parameter small language model suited to text generation tasks in resource-constrained environments.
Downloads: 312
Release date: July 16, 2024

Model Overview

This model is a 4-bit quantized version converted from HuggingFaceTB/SmolLM-135M, primarily used for efficient text generation.
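To see why 4-bit quantization matters for a model this size, a back-of-the-envelope memory estimate helps. The sketch below assumes group-wise quantization with one fp16 scale and one fp16 bias per group of 64 weights; the group size and overhead layout are assumptions for illustration, not confirmed details of this checkpoint.

```python
# Rough memory estimate for a 4-bit quantized 135M-parameter model,
# compared against storing the same weights in fp16.
params = 135_000_000
bits_per_weight = 4
group_size = 64          # assumed quantization group size
overhead_bits = 2 * 16   # assumed fp16 scale + fp16 bias per group

weight_bits = params * bits_per_weight
overhead_bits_total = (params // group_size) * overhead_bits
total_mib = (weight_bits + overhead_bits_total) / 8 / 2**20

fp16_mib = params * 16 / 8 / 2**20
print(f"4-bit: ~{total_mib:.0f} MiB, fp16: ~{fp16_mib:.0f} MiB")
```

Under these assumptions the quantized weights take roughly 72 MiB versus about 257 MiB for fp16, which is what makes on-device deployment practical.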

Model Features

4-bit quantization
The model has undergone 4-bit quantization, significantly reducing memory usage and computational resource requirements.
Lightweight
With 135M parameters, it is suitable for running in resource-constrained environments.
Efficient inference
Optimized for the MLX framework, it delivers efficient inference on Apple silicon.
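The 4-bit quantization described above can be illustrated with a minimal group-wise affine scheme: each group of weights is mapped to integers in [0, 15] using a per-group scale and zero point. This is a simplified sketch of the general technique, not the exact algorithm used to produce this checkpoint.

```python
def quantize_4bit(weights, group_size=4):
    """Group-wise affine 4-bit quantization: map each group of float
    weights to integers in [0, 15] with a per-group scale and zero point."""
    quantized, scales, zeros = [], [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 15 or 1.0  # 15 = 2**4 - 1 quantization levels
        scales.append(scale)
        zeros.append(lo)
        quantized.append([round((w - lo) / scale) for w in group])
    return quantized, scales, zeros

def dequantize_4bit(quantized, scales, zeros):
    """Reconstruct approximate float weights from 4-bit codes."""
    out = []
    for q_group, scale, zero in zip(quantized, scales, zeros):
        out.extend(q * scale + zero for q in q_group)
    return out

weights = [0.1, -0.5, 0.3, 0.8]
q, s, z = quantize_4bit(weights)
restored = dequantize_4bit(q, s, z)
```

The round-trip error is bounded by half the group's scale, which is why quantization costs little accuracy when weight groups have a narrow dynamic range.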

Model Capabilities

English text generation
Dialogue systems
Content creation

Use Cases

Resource-constrained environment applications
Mobile applications
Deploy lightweight text generation features on mobile devices
Edge computing
Implement localized language processing on edge devices
Development and testing
Prototype development
Quickly build language model application prototypes
Teaching and research
Used for small language model experiments in teaching and research