DeepSeek V3 0324 Fused 4E 29B Unhealed Preview
Unhealed DeepSeek-v3-0324-Instruct Fused Models (Research Release)
These are experimental, unhealed versions of DeepSeek-v3-0324-instruct created through model fusion, exclusively for research purposes.
Quick Start
CRITICAL NOTE: Untrained Fusion - Requires Healing!
These are unhealed, experimental versions of DeepSeek-v3-0324-instruct created through model fusion. They are not ready for direct use and will exhibit unpredictable behavior without significant post-training. These models are released exclusively for research purposes and require a specific "healing" process to restore functionality. Do not use these models without understanding and applying the healing procedure.
Preview Models: Exploring Compression Levels
The DeepSeek-V3-0324 model, which uses 256 experts, forms the foundation for these preview models. We offer four variations, each with a different level of compression, including:
- 8 Fused Experts, rank 4 (~39B parameters): reduced to roughly 1/20 of the original size.
- 4 Fused Experts, rank 4 (~29B parameters): reduced to roughly 1/23 of the original size.
Despite their significantly reduced size, these models demonstrate surprisingly strong performance for their parameter counts. More comprehensive testing is planned.
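For intuition only, the toy sketch below shows one way a group of expert weight matrices could be collapsed into a single shared matrix plus a small rank-4 correction per original expert. The shapes, the plain averaging, and the SVD-based residual correction are illustrative assumptions, not the actual moe-pruner procedure.

import torch

def fuse_expert_group(experts: list[torch.Tensor], rank: int = 4):
    """Toy fusion: average a group of expert weight matrices into one shared
    matrix, then keep a rank-`rank` SVD approximation of each expert's residual."""
    fused = torch.stack(experts).mean(dim=0)           # shared fused weight, shape (d_out, d_in)
    corrections = []
    for w in experts:
        U, S, Vh = torch.linalg.svd(w - fused, full_matrices=False)
        corrections.append((U[:, :rank] * S[:rank], Vh[:rank, :]))  # low-rank fit of the residual
    return fused, corrections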
What to Expect (Before Healing)
These models are in an initial, unstable state after the fusion process. Expect significantly degraded performance and unpredictable outputs. They are not representative of the final capabilities of a properly trained fused model. This is a very early iteration of the fusion and distillation technique, using a small sample size for distillation. Significant room for improvement remains in the distillation process.
Healing Instructions (Required)
Crucially, you must perform post-training to make these models usable. The necessary scripts and detailed instructions are available in the moe-pruner repository: https://github.com/gabrielolympie/moe-pruner
Follow the instructions in that repository carefully to "heal" the pruned model. This process is essential to recover performance.
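As a rough orientation only, the snippet below sketches the general shape of a distillation-style healing step: the unhealed student is pushed toward the original teacher's token distribution with a KL loss. The model handles, batch format, and hyperparameters are placeholders; the actual scripts and settings live in the moe-pruner repository.

import torch
import torch.nn.functional as F

def healing_step(student, teacher, batch, optimizer, temperature: float = 2.0) -> float:
    """One KL-distillation step; `student` and `teacher` are causal LMs whose
    forward pass returns an object with a `.logits` tensor."""
    with torch.no_grad():
        teacher_logits = teacher(**batch).logits
    student_logits = student(**batch).logits
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()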
Contributing and Future Improvements
This release represents an initial exploration of model fusion and distillation. Due to hardware limitations, significant compromises were made during development.
We welcome contributions to improve this work! There are two primary ways to help:
- Financial Support: Larger-scale experiments require significant compute resources. If you'd like to support future versions with a higher compute budget, you can donate here: https://gofund.me/1516dccd
- Code Contributions: Suggest improvements, bug fixes, or new features directly on the GitHub repository.
We are actively working to improve the fusion and distillation techniques, and your contributions are greatly appreciated.
Disclaimer
These models are provided "as is" for research purposes only. No guarantees are made regarding their performance or stability before the healing process is completed. Use at your own risk.
Features
DeepSeek-V3-0324 demonstrates notable improvements over its predecessor, DeepSeek-V3, in several key aspects.
Reasoning Capabilities
- Significant improvements in benchmark performance:
  - MMLU-Pro: 75.9 → 81.2 (+5.3)
  - GPQA: 59.1 → 68.4 (+9.3)
  - AIME: 39.6 → 59.4 (+19.8)
  - LiveCodeBench: 39.2 → 49.2 (+10.0)
Front-End Web Development
- Improved executability of generated code
- More aesthetically pleasing web pages and game front-ends
Chinese Writing Proficiency
- Enhanced style and content quality:
  - Aligned with the R1 writing style
  - Better quality in medium-to-long-form writing
- Feature Enhancements:
  - Improved multi-turn interactive rewriting
  - Optimized translation quality and letter writing
Chinese Search Capabilities
- Enhanced report analysis requests with more detailed outputs
Function Calling Improvements
- Increased accuracy in Function Calling, fixing issues from previous V3 versions
Documentation
Usage Recommendations
System Prompt
In the official DeepSeek web/app, we use the following system prompt, with a specific date filled in:
DeepSeek Chat, a large language model developed by DeepSeek
{current date}
For example,
DeepSeek Chat, a large language model developed by DeepSeek
March 24, 2025
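If helpful, the prompt can be assembled programmatically; the date formatting below is just one possible choice.

from datetime import date

# Fill in the {current date} placeholder of the system prompt shown above.
system_prompt = (
    "DeepSeek Chat, a large language model developed by DeepSeek\n"
    + date.today().strftime("%B %d, %Y")
)
print(system_prompt)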
Temperature
In our web and application environments, the temperature parameter $T_{model}$ is set to 0.3. Because many users use the default temperature of 1.0 in API calls, we have implemented a mapping that adjusts the input API temperature $T_{api}$ to a suitable model temperature:
$$ T_{model} = T_{api} \times 0.3 \quad (0 \leq T_{api} \leq 1) $$
$$ T_{model} = T_{api} - 0.7 \quad (1 < T_{api} \leq 2) $$
Thus, if you call V3 via the API, a temperature of 1.0 corresponds to a model temperature of 0.3.
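A minimal sketch of this mapping as a helper function (the function name is ours; only the piecewise formula comes from above):

def api_to_model_temperature(t_api: float) -> float:
    """Map an API temperature in [0, 2] to the internal model temperature."""
    if not 0.0 <= t_api <= 2.0:
        raise ValueError("API temperature must be in [0, 2]")
    return t_api * 0.3 if t_api <= 1.0 else t_api - 0.7

# The default API value of 1.0 maps to the recommended model temperature of 0.3.
assert abs(api_to_model_temperature(1.0) - 0.3) < 1e-9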
Prompts for File Uploading and Web Search
For file uploading, please follow the template below to construct prompts, where {file_name}, {file_content}, and {question} are arguments.
file_template = \
"""[file name]: {file_name}
[file content begin]
{file_content}
[file content end]
{question}"""
For Web Search, {search_results}, {cur_date}, and {question} are arguments.
For Chinese queries, we use the following prompt:
search_answer_zh_template = \
'''# The following contents are the search results related to the user's message:
{search_results}
In the search results I provide to you, each result is formatted as [webpage X begin]...[webpage X end], where X represents the numerical index of each article. Please cite the context at the end of the relevant sentence when appropriate. Use the citation format [citation:X] in the corresponding part of your answer. If a sentence is derived from multiple contexts, list all relevant citation numbers, such as [citation:3][citation:5]. Be sure not to cluster all citations at the end; instead, include them in the corresponding parts of the answer.
When responding, please keep the following points in mind:
- Today is {cur_date}.
- Not all content in the search results is closely related to the user's question. You need to evaluate and filter the search results based on the question.
- For listing-type questions (e.g., listing all flight information), try to limit the answer to 10 key points and inform the user that they can refer to the search sources for complete information. Prioritize providing the most complete and relevant items in the list. Avoid mentioning content not provided in the search results unless necessary.
- For creative tasks (e.g., writing an essay), ensure that references are cited within the body of the text, such as [citation:3][citation:5], rather than only at the end of the text. You need to interpret and summarize the user's requirements, choose an appropriate format, fully utilize the search results, extract key information, and generate an answer that is insightful, creative, and professional. Extend the length of your response as much as possible, addressing each point in detail and from multiple perspectives, ensuring the content is rich and thorough.
- If the response is lengthy, structure it well and summarize it in paragraphs. If a point-by-point format is needed, try to limit it to 5 points and merge related content.
- For objective Q&A, if the answer is very brief, you may add one or two related sentences to enrich the content.
- Choose an appropriate and visually appealing format for your response based on the user's requirements and the content of the answer, ensuring strong readability.
- Your answer should synthesize information from multiple relevant webpages and avoid repeatedly citing the same webpage.
- Unless the user requests otherwise, your response should be in the same language as the user's question.
# The user's message is:
{question}'''
For English queries, we use the following prompt:
search_answer_en_template = \
'''# The following contents are the search results related to the user's message:
{search_results}
In the search results I provide to you, each result is formatted as [webpage X begin]...[webpage X end], where X represents the numerical index of each article. Please cite the context at the end of the relevant sentence when appropriate. Use the citation format [citation:X] in the corresponding part of your answer. If a sentence is derived from multiple contexts, list all relevant citation numbers, such as [citation:3][citation:5]. Be sure not to cluster all citations at the end; instead, include them in the corresponding parts of the answer.
When responding, please keep the following points in mind:
- Today is {cur_date}.
- Not all content in the search results is closely related to the user's question. You need to evaluate and filter the search results based on the question.
- For listing-type questions (e.g., listing all flight information), try to limit the answer to 10 key points and inform the user that they can refer to the search sources for complete information. Prioritize providing the most complete and relevant items in the list. Avoid mentioning content not provided in the search results unless necessary.
- For creative tasks (e.g., writing an essay), ensure that references are cited within the body of the text, such as [citation:3][citation:5], rather than only at the end of the text. You need to interpret and summarize the user's requirements, choose an appropriate format, fully utilize the search results, extract key information, and generate an answer that is insightful, creative, and professional. Extend the length of your response as much as possible, addressing each point in detail and from multiple perspectives, ensuring the content is rich and thorough.
- If the response is lengthy, structure it well and summarize it in paragraphs. If a point-by-point format is needed, try to limit it to 5 points and merge related content.
- For objective Q&A, if the answer is very brief, you may add one or two related sentences to enrich the content.
- Choose an appropriate and visually appealing format for your response based on the user's requirements and the content of the answer, ensuring strong readability.
- Your answer should synthesize information from multiple relevant webpages and avoid repeatedly citing the same webpage.
- Unless the user requests otherwise, your response should be in the same language as the user's question.
# The user's message is:
{question}'''
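As an illustration of how the English template might be filled, each result is wrapped in the [webpage X begin]...[webpage X end] markers the template expects; the results, date, and question below are made up.

results = [
    "Example article text about topic A ...",
    "Example article text about topic B ...",
]
search_results = "\n".join(
    f"[webpage {i} begin]\n{text}\n[webpage {i} end]" for i, text in enumerate(results, 1)
)
prompt = search_answer_en_template.format(
    search_results=search_results,
    cur_date="March 24, 2025",
    question="Example user question?",
)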
How to Run Locally
The model structure of DeepSeek-V3-0324 is exactly the same as that of DeepSeek-V3. Please visit the DeepSeek-V3 repo for more information about running this model locally.
This model supports features such as function calling, JSON output, and FIM completion. For instructions on how to construct prompts for these features, please refer to the DeepSeek-V2.5 repo.
NOTE: Hugging Face Transformers is not yet directly supported.
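For reference only, function calling on the hosted model can also be exercised through DeepSeek's OpenAI-compatible API; the get_weather tool and its schema below are purely illustrative and not part of this repository.

from openai import OpenAI

client = OpenAI(api_key="<DEEPSEEK_API_KEY>", base_url="https://api.deepseek.com")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool, not defined anywhere in this card
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)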
License
This repository and the model weights are licensed under the MIT License.
Technical Details
@misc{deepseekai2024deepseekv3technicalreport,
title={DeepSeek-V3 Technical Report},
author={DeepSeek-AI},
year={2024},
eprint={2412.19437},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.19437},
}
Contact
If you have any questions, please raise an issue or contact us at service@deepseek.com.

