
DeepSeek V3 0324 GGUF

Developed by MaziyarPanahi
GGUF quantized version of DeepSeek-V3-0324, suitable for local text generation tasks.
Downloads: 97.25k
Release Date: 3/24/2025

Model Overview

This model is a GGUF quantized version of DeepSeek-V3-0324, intended primarily for local text generation, with quantization levels down to 2-bit precision.
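As a rough sketch of local use, a GGUF file such as this one can be loaded with the llama-cpp-python bindings. The model file name below is a placeholder and should be replaced with the actual quantized file from the repository; context size and GPU offload settings depend on the available hardware.

from llama_cpp import Llama

# Load a local GGUF quant; the file name is a placeholder, not necessarily
# the exact name used in the repository.
llm = Llama(
    model_path="DeepSeek-V3-0324.Q2_K.gguf",
    n_ctx=4096,        # context window; lower it on memory-constrained machines
    n_gpu_layers=0,    # raise above 0 only with a GPU-enabled build
)

# Simple completion-style call for a text generation task.
output = llm("Write a one-sentence summary of the GGUF format.", max_tokens=128)
print(output["choices"][0]["text"])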

Model Features

GGUF Format Support
Uses the GGUF format, which replaces the discontinued GGML format and is supported by a wide range of clients and libraries.
2-bit Precision Quantization
Offers 2-bit precision quantization, making it well suited to resource-limited devices.
Broad Client Compatibility
Works with popular local clients and libraries such as llama.cpp, LM Studio, and text-generation-webui (see the download sketch after this list).
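As an illustration of how these GGUF files are typically obtained before a client loads them, the sketch below downloads one file from the Hugging Face Hub with huggingface_hub. The repository ID and file name are assumptions; check the model page for the exact names.

from huggingface_hub import hf_hub_download

# Fetch one quantized GGUF file into the local cache; repo_id and filename
# are assumed values and should be verified against the actual repository.
local_path = hf_hub_download(
    repo_id="MaziyarPanahi/DeepSeek-V3-0324-GGUF",
    filename="DeepSeek-V3-0324.Q2_K.gguf",
)
print(local_path)  # path that llama.cpp, LM Studio, and similar tools can load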

Model Capabilities

Text Generation

Use Cases

Local Text Generation
Story Generation
Perform local story generation using tools like KoboldCpp.
Dialogue System
Build a local dialogue system using tools such as text-generation-webui (a minimal chat sketch follows below).
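As a minimal sketch of the dialogue use case, llama-cpp-python also exposes an OpenAI-style chat interface. The model file name is again a placeholder, and the chat template is read from the GGUF metadata when present.

from llama_cpp import Llama

llm = Llama(model_path="DeepSeek-V3-0324.Q2_K.gguf")  # placeholder path

# One chat turn in OpenAI message format.
reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise local assistant."},
        {"role": "user", "content": "Explain in two sentences what a dialogue system does."},
    ],
    max_tokens=200,
)
print(reply["choices"][0]["message"]["content"])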