
DeepSeek-V2-Chat GGUF

Developed by leafspark
A GGUF-quantized version of DeepSeek-V2-Chat, suitable for local deployment and inference.
Downloads 1,388
Release Time: 5/17/2024

Model Overview

DeepSeek-V2-Chat is a large language model supporting Chinese and English text generation, provided here in GGUF quantized form. The quantizations were produced with llama.cpp and are intended for local inference.

Model Features

Multiple quantization versions
Quantizations ranging from BF16 down to IQ1_M are provided, covering a range of hardware and performance requirements.
Efficient local operation
The model runs locally through llama.cpp, with no cloud dependency (see the sketch after this list).
Chinese and English support
The model handles Chinese and English text generation, making it suitable for multilingual applications.
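As a rough illustration of local operation, the sketch below loads one of the GGUF files with the llama-cpp-python bindings and runs a single chat completion. The file name, context size, and generation settings are assumptions; substitute the quantization you actually downloaded.

from llama_cpp import Llama

# Minimal local-inference sketch using llama-cpp-python.
# The model_path below is an assumed file name, not the exact file shipped in this repo.
llm = Llama(
    model_path="DeepSeek-V2-Chat.Q4_K_M.gguf",  # assumed quantization file
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a short greeting in Chinese and English."}
    ],
    max_tokens=256,
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])

Lower-bit quantizations such as IQ1_M trade output quality for memory, so the same script can be pointed at a smaller file when VRAM or RAM is limited.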

Model Capabilities

Text generation
Chat completion
Code generation

Use Cases

Chat applications
Command-line chat mode
Run llama.cpp's command-line chat mode for interactive dialogue with the model.
API services
OpenAI-compatible server
Deploy the model behind an OpenAI-compatible API so it can be called remotely (a client sketch follows).
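For the API-service use case, the sketch below calls a locally running llama.cpp server through the standard openai Python client. The host, port, and model name are assumptions for illustration; adjust them to match how the server was started.

from openai import OpenAI

# Client sketch against a llama.cpp server started with something like:
#   llama-server -m DeepSeek-V2-Chat.Q4_K_M.gguf --port 8080
# (assumed command line; the key is not checked unless the server enables one)
client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed local server address
    api_key="sk-no-key-required",
)

completion = client.chat.completions.create(
    model="DeepSeek-V2-Chat",  # informational for llama.cpp's server
    messages=[
        {"role": "user", "content": "Summarize the advantages of GGUF quantization."}
    ],
)

print(completion.choices[0].message.content)

Because the endpoint follows the OpenAI chat-completions format, existing tools written against that API can typically be repointed at the local server by changing only the base URL.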