# Code Generation
Diffucoder 7B Cpgrpo 6bit
DiffuCoder-7B-cpGRPO-6bit is a text generation model converted to the MLX format, focused on code and text diffusion tasks (a loading sketch follows this entry).
Large Language Model Other
mlx-community
103
1
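For MLX-format conversions such as this one, the mlx-lm package is the usual entry point on Apple Silicon. The sketch below is a minimal, hedged example: the Hub id is assumed from the listing, and since DiffuCoder is a diffusion-style model its model card may prescribe its own generation routine, so treat the generate call as illustrative.

```python
# Minimal sketch: load an MLX-converted model with mlx-lm (pip install mlx-lm).
from mlx_lm import load, generate

# Hub id assumed from the listing above.
model, tokenizer = load("mlx-community/DiffuCoder-7B-cpGRPO-6bit")

prompt = "Write a Python function that checks whether a string is a palindrome."
# Standard autoregressive generation; a diffusion model may ship its own sampler.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```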
Qwen Qwen2.5 Coder 1.5B GGUF
A GGUF-quantized version of Qwen2.5-Coder-1.5B, optimized for code generation and offering multiple quantization options to balance performance against resource consumption (a usage sketch follows this entry).
Large Language Model
featherless-ai-quants
228
1
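GGUF releases such as this one are typically run with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python, assuming one of the quantized files has already been downloaded; the filename is illustrative, not the repo's exact naming.

```python
# Minimal sketch: run a downloaded GGUF quant with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

# Path is illustrative; pick the quantization level that fits your memory budget.
llm = Llama(model_path="qwen2.5-coder-1.5b-q4_k_m.gguf", n_ctx=4096)

out = llm.create_completion(
    "def fibonacci(n):",  # code-completion style prompt
    max_tokens=128,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```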
Whiterabbitneo WhiteRabbitNeo V3 7B GGUF
Apache-2.0
A llama.cpp imatrix quantization of WhiteRabbitNeo-V3-7B, specialized for cybersecurity and DevOps tasks with code generation support.
Large Language Model
bartowski
1,166
2
Seed Coder 8B Reasoning GGUF
MIT
Seed-Coder-8B-Reasoning is an 8B-parameter open-source code model focused on code generation and reasoning, offering strong performance with efficient use of parameters.
Large Language Model
Transformers
unsloth
2,550
2
Qwen2.5 Coder 7B NEP Fix
Apache-2.0
A text generation and inference model based on Qwen/Qwen2.5-Coder-7B, optimized with the Unsloth and TRL libraries to train 2x faster.
Large Language Model
Transformers English
lurf21
20
1
Bytedance Seed.Seed Coder 8B Reasoning GGUF
Seed-Coder-8B-Reasoning is a large language model with 8B parameters developed by ByteDance-Seed, focusing on code generation and reasoning tasks.
Large Language Model
DevQuasar
1,978
1
Andrewzh Absolute Zero Reasoner Coder 7b GGUF
A llama.cpp quantization of andrewzh's Absolute_Zero_Reasoner-Coder-7b, offering multiple quantization levels and suited to reasoning and code generation tasks.
Large Language Model
bartowski
1,325
5
Avern 1.5 Mintra
MIT
Qwen2.5-Coder-7B-Instruct is an instruction-tuned 7B-parameter code generation model based on the Qwen2.5 architecture, suited to code generation and programming assistance tasks.
Large Language Model
averntech
87
1
Ophiuchi Qwen3 14B Instruct
Apache-2.0
An instruction-tuned model based on the Qwen3-14B architecture, specializing in mathematical reasoning, code generation, and factual accuracy.
Large Language Model
Transformers Supports Multiple Languages
prithivMLmods
21
3
Falcon H1 1.5B Instruct
Other
Falcon-H1 is an efficient hybrid architecture language model developed by TII, combining the strengths of Transformers and Mamba architectures, supporting English and multilingual tasks.
Large Language Model
Transformers
tiiuae
1,022
4
Seed Coder 8B Reasoning
MIT
Seed-Coder-8B-Reasoning is an 8B-parameter open-source code model whose reasoning is strengthened through reinforcement learning; it supports a 65,536-token context length and excels at programming tasks (a loading sketch follows this entry).
Large Language Model
Transformers
ByteDance-Seed
4,622
102
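Models published in Transformers format, like this one, can generally be loaded with the standard Auto classes. A minimal sketch follows, assuming the Hub id ByteDance-Seed/Seed-Coder-8B-Reasoning and a chat-style template; consult the model card for the exact prompt format.

```python
# Minimal sketch: load a Transformers-format code model and generate from a chat prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ByteDance-Seed/Seed-Coder-8B-Reasoning"  # assumed Hub id based on the listing
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write quicksort in Python and state its average complexity."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```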
Tablellm 13b
TableLLM is a large language model designed specifically for tabular data manipulation, targeting real-world office scenarios that involve processing table data.
Large Language Model
Transformers English
RUCKBReasoning
100
27
Gemma 3 27b It Qat GGUF
Gemma 3 is a lightweight open model series built by Google on Gemini technology, supporting multimodal input and text output, with a 128K context window and support for 140+ languages.
Large Language Model English
unsloth
2,683
3
Phi 4 Reasoning Plus
MIT
Phi-4-reasoning-plus is an advanced open-weight reasoning model developed by Microsoft Research, optimized through supervised fine-tuning and reinforcement learning based on Phi-4, focusing on advanced reasoning capabilities in mathematics, science, and coding fields.
Large Language Model
Transformers Supports Multiple Languages
microsoft
19.83k
261
Deepcoder 14B Preview Exl2
DeepCoder-14B-Preview is a code generation model developed based on DeepSeek-R1-Distill-Qwen-14B, focusing on solving verifiable programming problems.
Large Language Model English
cgus
46
2
Phi 4 Reasoning
MIT
Phi-4 Reasoning is a cutting-edge open-weight reasoning model based on Phi-4, fine-tuned with supervised chain-of-thought trajectory datasets and trained via reinforcement learning, specializing in mathematics, science, and programming skills.
Large Language Model
Transformers Supports Multiple Languages
microsoft
11.31k
172
Huihui Ai.deepseek V3 0324 Pruned Coder 411B GGUF
DeepSeek-V3-0324-Pruned-Coder-411B is a pruned and optimized code generation model based on the DeepSeek-V3 architecture, focusing on code generation tasks.
Large Language Model
DevQuasar
2,706
2
Qwen2.5 14B YOYO V5
Apache-2.0
The fifth-generation Qwen2.5-YOYO model integrates features from multiple advanced models, optimizes the model merging formula, and supports a context length of 1 million tokens.
Large Language Model Supports Multiple Languages
YOYO-AI
33
3
Gemma 3 12b It Codeforces SFT
A large language model fine-tuned from google/gemma-3-12b-it on the codeforces-cots dataset.
Large Language Model
Transformers
qgallouedec
43
5
Open R1 OlympicCoder 7B GGUF
Apache-2.0
OlympicCoder-7B is a 7B-parameter large language model focused on code generation; this release is a llama.cpp quantization of open-r1/OlympicCoder-7B offering multiple quantization levels (a download sketch follows this entry).
Large Language Model English
bartowski
5,859
9
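Quantization repos like this one usually hold many GGUF files at different quantization levels, and you normally fetch only the one you need. A minimal sketch with huggingface_hub follows; the repo id and filename are illustrative guesses, so check the repo's file list for the exact names.

```python
# Minimal sketch: download a single GGUF quant from a multi-file quantization repo.
from huggingface_hub import hf_hub_download

# repo_id and filename are illustrative; browse the repo to pick an exact quant.
path = hf_hub_download(
    repo_id="bartowski/open-r1_OlympicCoder-7B-GGUF",
    filename="open-r1_OlympicCoder-7B-Q4_K_M.gguf",
)
print(path)  # local cache path, ready to pass to llama.cpp as the model file
```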
Open R1 OlympicCoder 32B GGUF
Apache-2.0
Quantized version of OlympicCoder-32B, based on llama.cpp's imatrix quantization method, suitable for code generation tasks.
Large Language Model English
bartowski
12.60k
12
Kanana Nano 2.1b Base
Kanana is a series of bilingual large language models developed by Kakao, excelling in Korean tasks while maintaining competitiveness in English tasks. The 2.1b version is the lightweight base model of this series.
Large Language Model
Transformers Supports Multiple Languages
kakaocorp
4,039
33
Yulan Mini Instruct
MIT
YuLan-Mini-Instruct is a compact yet powerful 2.4-billion-parameter text generation model, specializing in mathematical and code reasoning tasks with support for both English and Chinese.
Large Language Model
Transformers Supports Multiple Languages
yulan-team
97
2
Qwen2.5 Coder 0.5B Q8 0 GGUF
Apache-2.0
This is a GGUF format model converted from the Qwen2.5-Coder-0.5B model, suitable for code generation tasks.
Large Language Model Supports Multiple Languages
ggml-org
943
5
Yi Coder 9B Chat
Apache-2.0
Yi-Coder is a series of open-source code language models that achieve state-of-the-art coding performance with fewer than 10 billion parameters.
Large Language Model
Transformers
01-ai
2,247
202
Deepseek Coder V2 Lite Instruct FP8
Other
An FP8-quantized version of DeepSeek-Coder-V2-Lite-Instruct, suitable for commercial and research use in English and optimized for inference efficiency (a serving sketch follows this entry).
Large Language Model
Transformers
RedHatAI
11.29k
7
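FP8 checkpoints such as this one are generally intended for inference engines with FP8 support, most commonly vLLM on recent NVIDIA GPUs. A minimal sketch follows, assuming the Hub id RedHatAI/DeepSeek-Coder-V2-Lite-Instruct-FP8; hardware without FP8 support would need a different quantization.

```python
# Minimal sketch: serve an FP8-quantized checkpoint with vLLM (pip install vllm).
from vllm import LLM, SamplingParams

# Hub id assumed from the listing; requires a GPU generation with FP8 support.
llm = LLM(
    model="RedHatAI/DeepSeek-Coder-V2-Lite-Instruct-FP8",
    trust_remote_code=True,
    max_model_len=4096,
)

params = SamplingParams(temperature=0.2, max_tokens=256)
outputs = llm.generate(["Write a Python function that flattens a nested list."], params)
print(outputs[0].outputs[0].text)
```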
Codestral 22B V0.1 Abliterated V3
Other
An orthogonalized version of Codestral-22B-v0.1 in which the model's refusal behavior has been removed via ablation so that it complies more readily with user requests.
Large Language Model
Transformers Other
failspy
1,344
11
Codeparrot Ds Distilgpt2
Apache-2.0
A code generation model fine-tuned from distilgpt2, suitable for code-related tasks.
Large Language Model
Transformers
zhuchi76
35
1
Phi 3 Mini 4k Instruct GGUF
MIT
Phi-3-Mini-4K-Instruct is a lightweight, state-of-the-art open model with 3.8 billion parameters, trained on the Phi-3 datasets with an emphasis on high-quality, reasoning-dense data.
Large Language Model
brittlewis12
170
1
Phi 3 Mini 128k Instruct
MIT
Phi-3 Mini 128K Instruct is a 3.8B parameter lightweight open-source model focused on reasoning capabilities, supporting 128K context length.
Large Language Model
Transformers Supports Multiple Languages
microsoft
399.68k
1,638
Snowflake Arctic Instruct
Apache-2.0
Arctic is a large language model built on a dense-MoE hybrid transformer architecture, developed by the Snowflake AI Research team, with 480 billion parameters and open-sourced under the Apache-2.0 license.
Large Language Model
Transformers
Snowflake
10.94k
354
Pygemma 2b Ultra Plus 4
Other
A Python programming assistant model fine-tuned from google/gemma-2b, specializing in Python code generation and problem-solving.
Large Language Model
Transformers English
Menouar
15
3
Codegemma 7b It
CodeGemma is a collection of lightweight open code models built on Gemma, specializing in code generation, code completion, and conversational tasks.
Large Language Model
Transformers
google
3,286
217
Codellama 34b Hf
Code Llama is a series of code generation and understanding models released by Meta, ranging from 7 billion to 34 billion parameters. This version is the 34 billion parameter base model.
Large Language Model
Transformers Other
meta-llama
492
15
Codellama 13b Instruct Hf
Code Llama is a series of pre-trained generative text models released by Meta, focusing on code generation and understanding, with versions ranging from 7 billion to 34 billion parameters.
Large Language Model
Transformers Other
meta-llama
2,307
22
Codellama 13b Hf
Code Llama is a series of pretrained and fine-tuned generative text models developed by Meta, with parameter scales ranging from 7B to 34B, suitable for general code generation and understanding.
Large Language Model
Transformers Other
meta-llama
482
17
Codellama 7b Instruct Hf
Code Llama is a series of code generation and comprehension models released by Meta, including pre-trained and fine-tuned versions with parameters ranging from 7B to 34B. This model is the 7B-parameter instruction fine-tuned version, specifically optimized for code assistant scenarios.
Large Language Model
Transformers Other
meta-llama
28.32k
48
Codellama 7b Hf
Code Llama is a series of code generation and understanding models from Meta with parameter scales ranging from 7B to 34B. This version is the 7B base model.
Large Language Model
Transformers Other
meta-llama
4,650
101
Hyperion 2.0 Mistral 7B
Apache-2.0
A multi-domain language model fine-tuned on the Hyperion-v2.0 dataset, excelling in scientific reasoning and complex task processing.
Large Language Model
Transformers Supports Multiple Languages
Locutusque
16
6
Starcoder2 3b Instruct
OpenRAIL
A large language model fine-tuned from starcoder2-3b, specializing in code generation and scoring 65.9 pass@1 on the HumanEval-Python benchmark.
Large Language Model
Transformers Other
TechxGenus
44
4