# Code generation

## DiffuCoder-7B-cpGRPO-8bit
DiffuCoder-7B-cpGRPO-8bit is a code generation model converted to MLX format, based on apple/DiffuCoder-7B-cpGRPO, designed to give developers an efficient code generation tool.
- Tags: Large Language Model, Other
- Publisher: mlx-community · Downloads: 272 · Likes: 2
## DiffuCoder-7B-cpGRPO-4bit
DiffuCoder-7B-cpGRPO-4bit is a 4-bit quantized conversion of Apple's DiffuCoder-7B-cpGRPO model, optimized for the MLX framework.
- Tags: Large Language Model, Other
- Publisher: mlx-community · Downloads: 218 · Likes: 1
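The "4bit"/"8bit" variants above rely on weight quantization. As a rough illustration of the general idea (this is a simplified sketch, not the actual MLX quantizer), group-wise affine quantization maps each small group of float weights to low-bit integer codes plus a per-group scale and offset:

```python
# Illustrative sketch of group-wise 4-bit affine quantization.
# Not the MLX implementation; group_size and values are made up.

def quantize_4bit(weights, group_size=4):
    """Map each group of floats to 4-bit codes (0..15) plus offset/scale."""
    groups = []
    for i in range(0, len(weights), group_size):
        g = weights[i:i + group_size]
        lo, hi = min(g), max(g)
        scale = (hi - lo) / 15 or 1.0          # 15 = max 4-bit code
        codes = [round((w - lo) / scale) for w in g]
        groups.append((lo, scale, codes))
    return groups

def dequantize(groups):
    """Reconstruct approximate floats from codes, offsets, and scales."""
    return [lo + c * scale for lo, scale, codes in groups for c in codes]

w = [0.12, -0.53, 0.98, 0.07, -0.91, 0.44, 0.30, -0.22]
restored = dequantize(quantize_4bit(w))
# Each weight is recovered to within half a quantization step, while the
# codes need only 4 bits each instead of 32.
```

The memory saving is what lets a 7B model like this one run on consumer Apple-silicon machines, at the cost of small per-weight reconstruction error.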
## OpenThinker3-7B-GGUF
OpenThinker3-7B-GGUF is a quantized version of open-thoughts/OpenThinker3-7B, optimized for efficient inference. Fine-tuned from Qwen/Qwen2.5-7B-Instruct, it performs strongly on mathematics, code, and science problems.
- License: Apache-2.0
- Tags: Large Language Model, Transformers
- Publisher: QuantFactory · Downloads: 114 · Likes: 2
## Dsi Transformers Code T5 Base Python
A code processing model fine-tuned from Salesforce/codet5-base, focused on Python code tasks.
- License: Apache-2.0
- Tags: Large Language Model, Transformers
- Publisher: ngocnamk3er · Downloads: 342 · Likes: 1
## Seed-Coder-8B-Instruct-GGUF
Seed-Coder-8B-Instruct is a powerful open-source code model characterized by its model-centric design, transparency, and high performance, and it performs well across a variety of coding tasks.
- License: MIT
- Tags: Large Language Model, Transformers
- Publisher: unsloth · Downloads: 3,391 · Likes: 1
## Spec-T1-RL-7B
Spec-T1-RL-7B is a high-precision large language model focused on mathematical reasoning, algorithmic problem-solving, and code generation, performing strongly on technical benchmarks.
- License: MIT
- Tags: Large Language Model, Safetensors, English
- Publisher: SVECTOR-CORPORATION · Downloads: 4,626 · Likes: 6
## Phi-4-mini-reasoning-MLX-4bit
A 4-bit MLX-format quantization of Microsoft's Phi-4-mini-reasoning model, suited to text generation tasks.
- License: MIT
- Tags: Large Language Model
- Publisher: lmstudio-community · Downloads: 72.19k · Likes: 2
## Phi-4-reasoning-GGUF
Phi-4-reasoning is an advanced reasoning model fine-tuned from Phi-4. Through supervised fine-tuning and reinforcement learning, it demonstrates strong reasoning in mathematics, science, and coding.
- License: MIT
- Tags: Large Language Model, Transformers
- Publisher: unsloth · Downloads: 6,046 · Likes: 7
## OlympicCoder-32B-GGUF
OlympicCoder-32B is a code generation model based on Qwen2.5-Coder-32B-Instruct, employing IQ-DynamicGate ultra-low-bit quantization for efficient inference in memory-constrained environments.
- License: Apache-2.0
- Tags: Large Language Model, English
- Publisher: Mungert · Downloads: 361 · Likes: 3
## Theta-35
Theta-35 is an advanced reasoning model in SVECTOR's Theta series, focused on complex thinking and reasoning, and excels at hard problems that require deep logical analysis and multi-step reasoning.
- License: Apache-2.0
- Tags: Large Language Model, Transformers, English
- Publisher: SVECTOR-CORPORATION · Downloads: 10.44k · Likes: 5
## DeepSeek-R1-BF16
DeepSeek-R1 is a first-generation reasoning model that performs strongly on mathematics, code, and reasoning tasks, with performance comparable to OpenAI-o1.
- License: MIT
- Tags: Large Language Model, Transformers
- Publisher: opensourcerelease · Downloads: 1,486 · Likes: 16
## Granite-8B-Code-Instruct-128k-GGUF
IBM's Granite 8B code instruction model, supporting a 128k context length and focused on code generation and instruction understanding.
- License: Apache-2.0
- Tags: Large Language Model, Transformers, Other
- Publisher: tensorblock · Downloads: 186 · Likes: 1
## Qwen2.5-Coder-1.5B-GGUF
Qwen2.5-Coder-1.5B is a 1.5B-parameter code generation model supporting multiple programming languages, suited to code completion and generation tasks.
- License: Apache-2.0
- Tags: Large Language Model, Transformers, Supports Multiple Languages
- Publisher: tensorblock · Downloads: 162 · Likes: 1
## Qwen2.5-Coder-3B-Instruct-GGUF
A quantized build of the Qwen2.5-Coder-3B-Instruct model, providing an efficient, convenient option for code generation and conversational use.
- License: Other
- Tags: Large Language Model, Transformers, Supports Multiple Languages
- Publisher: gaianet · Downloads: 1,784 · Likes: 2
## llm-jp-3-1.8b
A large language model developed by Japan's National Institute of Informatics, supporting Japanese, English, and other languages, and suited to natural language processing tasks.
- License: Apache-2.0
- Tags: Large Language Model, Transformers, Supports Multiple Languages
- Publisher: llm-jp · Downloads: 1,378 · Likes: 14
## Llama-3.1-8B-ITA
An Italian-optimized large language model based on Meta-Llama-3.1-8B-Instruct.
- Tags: Large Language Model, Transformers, Supports Multiple Languages
- Publisher: DeepMount00 · Downloads: 6,719 · Likes: 11
## Cere-Llama-3.1-8B-Tr
A Turkish-optimized fine-tune of the Llama 3.1 8B large language model, trained on high-quality Turkish instruction datasets.
- Tags: Large Language Model, Transformers, Other
- Publisher: CerebrumTech · Downloads: 41 · Likes: 3
## Mistral-Nemo-Base-2407-chatml
Mistral-Nemo-Base-2407 is a 12-billion-parameter generative text pretraining model trained jointly by Mistral AI and NVIDIA, outperforming models of similar or smaller size.
- License: Apache-2.0
- Tags: Large Language Model, Transformers, Supports Multiple Languages
- Publisher: IntervitensInc · Downloads: 191 · Likes: 3
## Meta-Llama-3-8B-Instruct-hf-AWQ
An 8-billion-parameter instruction-tuned text generation model from the Meta Llama 3 series, optimized for dialogue scenarios.
- License: Other
- Tags: Large Language Model, Transformers
- Publisher: solidrust · Downloads: 848 · Likes: 9
## DBRX Base
A Mixture-of-Experts (MoE) large language model developed by Databricks, with 132 billion total parameters and 36 billion active parameters, supporting a 32K context window.
- License: Other
- Tags: Large Language Model, Transformers
- Publisher: databricks · Downloads: 100 · Likes: 557
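The gap between DBRX's 132B total and 36B active parameters comes from MoE routing: each token is dispatched to only its top-k experts, so most parameters sit idle on any given forward pass. A minimal sketch of the routing step (illustrative only, not the DBRX implementation; the expert count and logits below are made up):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(router_logits, k=2):
    """Pick the top-k experts for one token and renormalize their weights."""
    probs = softmax(router_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    return {i: probs[i] / total for i in top}

# 16 hypothetical experts, but only 2 actually run for this token:
logits = [0.1, 2.3, -1.0, 0.4, 1.8, 0.0, -0.5, 0.7,
          0.2, -2.0, 1.1, 0.3, -0.1, 0.9, 0.5, -0.3]
active = route(logits, k=2)
print(active)  # two expert indices with weights summing to 1
```

The token's output is then the weighted sum of just those experts' outputs, which is why compute and "active" memory scale with k rather than with the total expert count.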
## CroissantLLMBase-GGUF
CroissantLLM is a 1.3B-parameter language model trained on 3T English-French bilingual tokens, designed for research and industrial applications, and capable of running smoothly on consumer-grade hardware.
- License: MIT
- Tags: Large Language Model, Supports Multiple Languages
- Publisher: croissantllm · Downloads: 57 · Likes: 4
## CodeLlama-70B-Instruct-GGUF
CodeLlama 70B Instruct is a large-scale code generation model based on the Llama 2 architecture, specifically optimized for code understanding and generation tasks.
- Tags: Large Language Model, Other
- Publisher: TheBloke · Downloads: 2,703 · Likes: 57
## CodeLlama-70B-Python-hf
Code Llama is a 70B-parameter Python-specialized code generation model developed by Meta, based on the Llama 2 architecture with support for 16k context length.
- Tags: Large Language Model, Transformers, Other
- Publisher: codellama · Downloads: 115 · Likes: 108
## Piccolo-math-2x7b
Piccolo-math-2x7b is a large language model specializing in mathematical and logical reasoning, named in honor of the author's dog Klaus. It performs well across multiple benchmarks, particularly on mathematical and code generation tasks.
- License: MIT
- Tags: Large Language Model, Transformers
- Publisher: macadeliccc · Downloads: 87 · Likes: 2
## TinyLlama-1.1B-32k
A 32k-context fine-tune of TinyLlama-1.1B that achieves long-context capability by raising the RoPE theta (base frequency).
- License: Apache-2.0
- Tags: Large Language Model, Transformers, English
- Publisher: Doctor-Shotgun · Downloads: 51 · Likes: 29
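Why raising RoPE theta helps with long context: each pair of head dimensions in rotary position embedding rotates with wavelength 2·pi·theta^(2i/d), so a larger base stretches every wavelength and lets the same dimensions encode much longer position ranges before the slowest rotation wraps around. A small sketch (the head dimension and theta values below are illustrative, not this model's actual configuration):

```python
import math

def rope_wavelengths(theta, dim=64):
    """Wavelength (in positions) of each rotary dimension pair."""
    return [2 * math.pi * theta ** (2 * i / dim) for i in range(dim // 2)]

default = rope_wavelengths(10_000.0)   # Llama-style default base
raised = rope_wavelengths(500_000.0)   # a raised base for longer context

# The fastest pair always has wavelength 2*pi; the slowest pair's
# wavelength grows dramatically with theta, covering far more positions.
print(f"longest wavelength at theta=1e4: {max(default):,.0f}")
print(f"longest wavelength at theta=5e5: {max(raised):,.0f}")
```

Raising theta alone degrades the pretrained position encoding slightly, which is why such conversions are paired with fine-tuning on long sequences, as this card describes.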
## ARIA-70B-V2-GGUF
ARIA 70B V2 is a large-scale language model based on the Llama 2 architecture, supporting French and English, with a focus on text generation tasks.
- Tags: Large Language Model, Supports Multiple Languages
- Publisher: TheBloke · Downloads: 1,100 · Likes: 3
## WizardCoder-Python-13B-V1.0-GPTQ
WizardCoder Python 13B V1.0 is a large language model developed by WizardLM, focused on Python code generation. It is based on the Llama 2 architecture and performs strongly on the HumanEval benchmark.
- Tags: Large Language Model, Transformers
- Publisher: TheBloke · Downloads: 309 · Likes: 76
## replit-code-v1-3b
replit-code-v1-3b is a 2.7B-parameter causal language model focused on code completion, developed by Replit, Inc.
- Tags: Large Language Model, Transformers, Other
- Publisher: lentan · Downloads: 60 · Likes: 3
## Godot-Dodo-4x-60k-llama-7b
An instruction-following model fine-tuned from LLaMA-7B, specifically optimized for code instruction scenarios.
- Tags: Large Language Model, Transformers
- Publisher: minosu · Downloads: 39 · Likes: 4
## Code-Trans-T5-Small-API-Generation
A pretrained API-recommendation generation model based on the T5-small architecture, designed for Java programming tasks.
- Tags: Large Language Model, Transformers
- Publisher: SEBIS · Downloads: 15 · Likes: 0
© 2025 AIbase