Molmo-7B-D BnB 4-bit
Molmo-7B-D BnB 4-bit is a 4-bit quantized version of the Molmo-7B-D large language model, produced with bitsandbytes (BnB). Quantization reduces the model size from 30GB to 7GB and cuts the VRAM requirement to roughly 12GB.
Downloads: 1,994
Release Time: 9/26/2024
Model Overview
Molmo-7B-D BnB 4-bit is an efficient quantized large language model. 4-bit quantization significantly reduces both the model size and the VRAM requirement, making the model suitable for deployment in resource-constrained environments.
Model Features
4-bit quantization
BnB (bitsandbytes) 4-bit quantization compresses the model from 30GB to 7GB.
Low VRAM requirement
After quantization, the model runs in about 12GB of VRAM, lowering the hardware barrier to entry (see the loading sketch below).
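The quantized setup can be reproduced with the Hugging Face transformers and bitsandbytes libraries. The sketch below shows one common pattern: loading the weights in 4-bit NF4 via BitsAndBytesConfig. The repo id is assumed to be the original Molmo-7B-D repository (allenai/Molmo-7B-D-0924); substitute the pre-quantized checkpoint you actually use.

```python
# Minimal sketch: load Molmo-7B-D with bitsandbytes 4-bit (NF4) quantization.
# The repo id below is an assumption; replace it with your own checkpoint.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

MODEL_ID = "allenai/Molmo-7B-D-0924"  # assumed original full-precision repo

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NF4 quantization scheme
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for speed/stability
)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",        # place layers on the available GPU(s)
    trust_remote_code=True,   # Molmo ships custom modeling code
)
```

With this configuration, the weights alone occupy roughly 7GB, and total VRAM use lands around the ~12GB figure quoted above, depending on context length and batch size.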
Model Capabilities
Text generation
Use Cases
Text processing
Text generation
It can be used to generate various types of text content (see the generation sketch below).
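For plain-text generation, a minimal sketch continuing from the loading example above (reusing `model` and `MODEL_ID`) could look like the following. It assumes the checkpoint supports the standard tokenizer and generate() interface; the prompt is purely illustrative.

```python
# Minimal sketch: text generation with the 4-bit model loaded above.
# Assumes the standard tokenizer/generate path works for this checkpoint.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

prompt = "Write a short product description for a reusable water bottle."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```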