# Code generation optimization

- **AceReason-Nemotron-14B-GGUF** (QuantFactory) · Large Language Model · Transformers · 326 downloads · 2 likes
  AceReason-Nemotron-14B is a math and code reasoning model trained with reinforcement learning; it performs strongly on a range of math and code reasoning benchmarks.
- **AceReason-Nemotron-7B-GGUF** (QuantFactory) · Large Language Model · Transformers · 326 downloads · 2 likes
  AceReason-Nemotron-7B is a math and code reasoning model trained with reinforcement learning, starting from DeepSeek-R1-Distill-Qwen-7B; it performs strongly across multiple benchmarks.
- **The Teacher** (shiviktech) · Large Language Model · Safetensors · English · 824 downloads · 0 likes
  A language model fine-tuned from Qwen3-1.7B that improves mathematical reasoning through reinforcement learning.
- **Murai-350M-v0.1-beta** (DeepMount00, Apache-2.0) · Large Language Model · Transformers · 140 downloads · 1 like
  A text-generation model built on the transformers library, with a parameter-efficient architecture and strong text-generation performance.
- **Devstral-Small-2505.w4a16-gptq** (mratsim, Apache-2.0) · Large Language Model · Safetensors · 557 downloads · 1 like
  A 4-bit GPTQ quantization of mistralai/Devstral-Small-2505, optimized for consumer-grade hardware.
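A back-of-the-envelope estimate shows why 4-bit weight quantization like this targets consumer GPUs. This sketch is illustrative only; the ~24B parameter count assumed for a Devstral-Small-class model is an approximation, and it ignores KV cache, activations, and quantization metadata overhead.

```python
def weight_memory_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate memory for model weights alone: params * bits / 8 bytes,
    expressed in GiB. Ignores KV cache, activations, and quant metadata."""
    return n_params * bits_per_weight / 8 / 2**30

# Assumed parameter count for a Devstral-Small-class model (~24B, an estimate).
n = 24e9
print(f"fp16 weights:        {weight_memory_gib(n, 16):.1f} GiB")  # 44.7 GiB
print(f"w4a16 GPTQ weights:  {weight_memory_gib(n, 4):.1f} GiB")   # 11.2 GiB
```

At 4 bits per weight, the weights of a ~24B model drop from roughly 45 GiB in fp16 to about 11 GiB, which fits a single 16 GiB consumer GPU with room left for activations.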
- **Marin-8B-Instruct** (marin-community, Apache-2.0) · Large Language Model · Safetensors · English · 239 downloads · 1 like
  An open-source 8B-parameter large language model based on the Llama architecture, supporting English text-generation tasks.
- **Seed-Coder-Triton-8b-v1** (winglian, MIT) · Large Language Model · Transformers · 1,388 downloads · 1 like
  A large language model fine-tuned from ByteDance-Seed/Seed-Coder-8B-Base on a dedicated dataset, with support for long-sequence input and efficient training strategies.
- **Seed-Coder-8B-Reasoning-bf16** (ByteDance-Seed, MIT) · Large Language Model · Transformers · 4,382 downloads · 9 likes
  Seed-Coder is a family of open-source 8B-scale code models with base, instruct, and reasoning variants; the reasoning variant strengthens its reasoning through reinforcement-learning training and supports a 64K context length.
- **andrewzh_Absolute_Zero_Reasoner-Coder-14b-GGUF** (bartowski) · Large Language Model · 1,995 downloads · 5 likes
  An imatrix quantization of andrewzh's Absolute_Zero_Reasoner-Coder-14b, produced with llama.cpp and suited to reasoning and code-generation tasks.
- **OlympicCoder-7B-GGUF** (Mungert, Apache-2.0) · Large Language Model · English · 849 downloads · 3 likes
  A code-generation model built on Qwen2.5-Coder-7B-Instruct, quantized with IQ-DynamicGate ultra-low-bit quantization and aimed at memory-constrained environments.
- **DeepCoder-14B-Preview-GGUF** (Mungert, MIT) · Large Language Model · English · 1,764 downloads · 6 likes
  An ultra-low-bit (1–2 bit) quantization using IQ-DynamicGate technology, suited to memory-constrained devices and edge-computing scenarios.
- **Dans-PersonalityEngine-V1.2.0-24b-GGUF** (bartowski, Apache-2.0) · Large Language Model · Multilingual · 16.73k downloads · 23 likes
  A llama.cpp imatrix quantization of PocketDoc/Dans-PersonalityEngine-V1.2.0-24b, offering multiple quantization options for text-generation tasks.
- **Dolphin3.0-Llama3.2-3B-GGUF** (bartowski) · Large Language Model · English · 5,665 downloads · 15 likes
  A 3B-parameter large language model on the Llama 3.2 architecture for English text generation, quantized with llama.cpp using imatrix.
- **Qwen2.5-Coder-14B-Instruct-abliterated-GGUF** (bartowski, Apache-2.0) · Large Language Model · 1,240 downloads · 12 likes
  A quantized build of Qwen2.5-Coder-14B-Instruct-abliterated, offering multiple quantization types to suit different hardware.
- **Granite-3.0-3b-a800m-instruct** (ibm-granite, Apache-2.0) · Large Language Model · Transformers · 5,240 downloads · 18 likes
  A 3B-parameter instruction-tuned language model from IBM on the Granite 3.0 architecture, supporting multilingual tasks and commercial use.
- **Yi-Coder-1.5B-Chat** (01-ai, Apache-2.0) · Large Language Model · Transformers · 295 downloads · 34 likes
  Yi-Coder-1.5B is an open-source 1.5B-parameter code language model that supports 52 programming languages and 128K-token long-context understanding.
- **OpenHermes-Llama-3B** (cfahlgren1, Apache-2.0) · Large Language Model · Transformers · English · 81 downloads · 3 likes
  An instruction-following model fine-tuned from OpenLLaMA-3B, tuned for role-play, instruction following, and code generation.
- **Phi-2-GGUF** (TheBloke, Other) · Large Language Model · Multilingual · 41.5M downloads · 205 likes
  Phi-2 is a small but capable 2.7B-parameter language model from Microsoft, focused on efficient inference and high-quality text generation.
- **Phi-1.5** (microsoft, MIT) · Large Language Model · Transformers · Multilingual · 111.94k downloads · 1,330 likes
  Phi-1.5 is a 1.3B-parameter natural-language model focused on text and code generation, with strong commonsense understanding and logical reasoning.
- **Llama-2-7b-chat-hf-function-calling-v2** (Trelis) · Large Language Model · English · 175 downloads · 136 likes
  Llama 2 is Meta's 7B-parameter dialogue-optimized large language model; this version extends it with function calling, returning structured JSON-format responses.
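The function-calling pattern such fine-tunes enable is model-agnostic at the application layer: available functions are described in the prompt, and the model's JSON reply is parsed and validated. A minimal sketch follows; the `get_weather` schema and the hard-coded reply are invented for illustration, and the actual Trelis prompt format may differ.

```python
import json

# Hypothetical function schema advertised to the model in the prompt.
functions = [{
    "name": "get_weather",
    "description": "Look up current weather for a city",
    "parameters": {"city": {"type": "string"}},
}]

prompt = (
    "You may call one of these functions by replying with JSON "
    '{"function": ..., "arguments": ...}:\n'
    + json.dumps(functions, indent=2)
    + "\n\nUser: What's the weather in Oslo?"
)

# A reply a function-calling fine-tune might produce; hard-coded here
# in place of an actual model call.
reply = '{"function": "get_weather", "arguments": {"city": "Oslo"}}'

call = json.loads(reply)
# Validate the call against the advertised schema before dispatching it.
assert call["function"] in {f["name"] for f in functions}
print(call["function"], call["arguments"]["city"])
```

The validation step matters in practice: a model can hallucinate function names or malformed arguments, so replies should be checked against the advertised schema before any function is actually invoked.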
- **GPT-Neo-1.3B-APPS-all** (flax-community, MIT) · Large Language Model · 16 downloads · 3 likes
  A code-generation model fine-tuned from GPT-Neo-1.3B on the APPS dataset, tuned for solving programming tasks.
© 2025 AIbase