Seed-Coder-8B-Reasoning
Seed-Coder-8B-Reasoning is a powerful open-source code model. It delivers strong performance on coding tasks, with a particular focus on reasoning capabilities, and is built on a model-centric, transparent data pipeline.
Quick Start
Prerequisites
You will need to install the latest versions of `transformers` and `accelerate`:
pip install -U transformers accelerate
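Optionally, you can verify that both libraries are available with a quick check. This is a minimal sketch; the model card does not specify exact minimum versions.
import transformers
import accelerate

# Print the installed versions to confirm both libraries import cleanly.
print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)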
Example Code
Here is a simple example demonstrating how to load the model and perform code generation using the Hugging Face Transformers API:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "ByteDance-Seed/Seed-Coder-8B-Reasoning"

# Load the tokenizer and the model; bfloat16 weights with device_map="auto"
# place the model on the available GPU(s), falling back to CPU if none is found.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "Write a quick sort algorithm."},
]

# Format the conversation with the model's chat template and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

# Generate; the large max_new_tokens budget leaves room for the long reasoning trace.
outputs = model.generate(input_ids, max_new_tokens=16384)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
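For higher-throughput inference, you may also be able to serve the model with an engine such as vLLM. The sketch below is an illustrative assumption rather than an officially documented setup: it assumes vLLM is installed and supports this model's architecture, and the sampling values and max_model_len are placeholders to adjust for your hardware.
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "ByteDance-Seed/Seed-Coder-8B-Reasoning"

# Assumption: vLLM can load this checkpoint directly; tune max_model_len to your GPU memory.
llm = LLM(model=model_id, trust_remote_code=True, max_model_len=32768)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
messages = [{"role": "user", "content": "Write a quick sort algorithm."}]

# Render the chat template to plain text and let vLLM handle batching and KV-cache management.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Illustrative sampling settings (assumption); adjust for your workload.
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=16384)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)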
Features
Overall Features of Seed-Coder
We are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder aims to promote the evolution of open code models through the following highlights:
- Model-centric: Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.
- Transparent: We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.
- Powerful: Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.
Features of Seed-Coder-8B-Reasoning
This repo contains the Seed-Coder-8B-Reasoning model, which has the following features:
- Type: Causal language model
- Training Stage: Pretraining & Post-training
- Data Source: Public datasets
- Context Length: 65,536 tokens (see the sketch below for one way to verify this from the model config)
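As a quick sanity check of the advertised context length, you can inspect the model's Hugging Face config without downloading the weights. This is a minimal sketch and assumes the config exposes a max_position_embeddings field (standard for Llama-style architectures); the exact field name may differ for this model.
from transformers import AutoConfig

# Load only the configuration file (no model weights are downloaded).
config = AutoConfig.from_pretrained("ByteDance-Seed/Seed-Coder-8B-Reasoning", trust_remote_code=True)

# Assumption: the context window is reported as max_position_embeddings; expected to be 65536.
print(getattr(config, "max_position_embeddings", None))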
Installation
No installation steps are required beyond installing the prerequisite libraries (transformers and accelerate) described above.
Usage Examples
Basic Usage
Basic usage is identical to the Quick Start example above: load the tokenizer and model, build a prompt with apply_chat_template, and call model.generate.
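As a slightly more advanced example, the sketch below streams tokens to stdout as they are generated and uses sampling. TextStreamer is a standard Transformers utility; the specific sampling values are illustrative assumptions, not official recommendations for this model.
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
import torch

model_id = "ByteDance-Seed/Seed-Coder-8B-Reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Implement binary search in Python and explain its complexity."}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, return_tensors="pt", add_generation_prompt=True
).to(model.device)

# Stream decoded text as tokens are produced, skipping the prompt and special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Illustrative sampling settings (assumption); adjust to taste or use greedy decoding instead.
model.generate(
    input_ids,
    max_new_tokens=16384,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    streamer=streamer,
)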
Documentation
Model Downloads
| Model Name | Context Length | Download | Notes |
|---|---|---|---|
| Seed-Coder-8B-Base | 32K | [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) | Pretrained on our model-centric code data. |
| Seed-Coder-8B-Instruct | 32K | [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) | Instruction-tuned for alignment with user intent. |
| Seed-Coder-8B-Reasoning (this repo) | 64K | [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning) | RL trained to boost reasoning capabilities. |
| Seed-Coder-8B-Reasoning-bf16 | 64K | [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning-bf16) | RL trained to boost reasoning capabilities. |
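To fetch one of these checkpoints ahead of time (for example, onto a machine that will later run offline), you can use the huggingface_hub client. A minimal sketch, assuming huggingface_hub is installed; the local directory path is only an example.
from huggingface_hub import snapshot_download

# Download all files of the chosen repository into a local directory (path is illustrative).
local_dir = snapshot_download(
    repo_id="ByteDance-Seed/Seed-Coder-8B-Reasoning",
    local_dir="./Seed-Coder-8B-Reasoning",
)
print("Downloaded to:", local_dir)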
Evaluation
Seed-Coder-8B-Reasoning achieves impressive performance on competitive programming benchmarks, demonstrating that smaller LLMs can also be competent at complex reasoning tasks. Our model surpasses QwQ-32B and DeepSeek-R1 on IOI'2024 and achieves an Elo rating comparable to o1-mini on Codeforces contests.
For detailed benchmark performance, please refer to our [Technical Report](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf).
Information Table
| Property | Details |
|---|---|
| Model Type | Causal language model |
| Training Data | Public datasets |
| Training Stage | Pretraining & Post-training |
| Context Length | 65,536 tokens |
License
This project is licensed under the MIT License. See the [LICENSE file](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE) for details.