🚀 Kexer models
Kexer models are a collection of open-source generative text models fine-tuned on the Kotlin Exercices dataset. This repository stores the fine-tuned Deepseek-coder-1.3B model in the Hugging Face Transformers format, offering enhanced performance on Kotlin code-generation tasks.
🚀 Quick Start
💻 Usage Examples
Basic Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model_name = 'JetBrains/deepseek-coder-1.3B-kexer'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to('cuda')

# A natural-language description followed by the start of a Kotlin function.
input_text = """\
This function takes an integer n and returns factorial of a number:
fun factorial(n: Int): Int {\
"""

# Tokenize the prompt and move it to the same device as the model.
input_ids = tokenizer.encode(
    input_text, return_tensors='pt'
).to('cuda')

# Generate a single completion of up to 60 tokens.
output = model.generate(
    input_ids, max_length=60, num_return_sequences=1,
    early_stopping=True, pad_token_id=tokenizer.eos_token_id,
)

generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
Advanced Usage
As with the base model, we can use fill-in-the-middle (FIM) completion. To do this, the prompt must follow this format:

```
'<|fim▁begin|>' + prefix + '<|fim▁hole|>' + suffix + '<|fim▁end|>'
```
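For illustration, here is a minimal FIM sketch that reuses the `tokenizer` and `model` from the basic example above; the Kotlin snippet and generation settings are our own assumptions, not part of the original card:

```python
# Minimal FIM sketch (assumes `tokenizer` and `model` from the basic example).
# We ask the model to fill in the body of a Kotlin function.
prefix = "fun sumEven(numbers: List<Int>): Int {\n"
suffix = "\n}"
fim_prompt = '<|fim▁begin|>' + prefix + '<|fim▁hole|>' + suffix + '<|fim▁end|>'

input_ids = tokenizer.encode(fim_prompt, return_tensors='pt').to('cuda')
output = model.generate(
    input_ids, max_new_tokens=64,
    pad_token_id=tokenizer.eos_token_id,
)

# Everything generated after the prompt is the model's proposal for the hole.
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```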
📚 Documentation
🔧 Technical Details
Training setup
The model was trained on a single A100 GPU with the following hyperparameters:
| Property | Details |
|------------------|------------------------------|
| warmup | 10% |
| max_lr | 1e-4 |
| scheduler | linear |
| total_batch_size | 256 (~130K tokens per step) |
| num_epochs | 4 |
More details about fine-tuning can be found in the technical report (coming soon!).
Fine-tuning data
For tuning this model, we used 15K examples from the synthetically generated Kotlin Exercices dataset. Every example follows the HumanEval format. In total, the dataset contains about 3.5M tokens.
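For reference, records in the original HumanEval benchmark pair a signature-plus-docstring prompt with a canonical solution and tests. The invented Kotlin record below only illustrates that shape; whether this dataset uses these exact field names should be checked on its page:

```python
# Hypothetical record in the HumanEval shape (Kotlin content invented for
# illustration; field names follow the original OpenAI HumanEval schema).
example = {
    "prompt": (
        "/**\n"
        " * This function takes an integer n and returns the factorial of n.\n"
        " */\n"
        "fun factorial(n: Int): Int {"
    ),
    "canonical_solution": "\n    return if (n <= 1) 1 else n * factorial(n - 1)\n}",
    "test": "fun main() {\n    check(factorial(5) == 120)\n}",
}
```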
Evaluation
For evaluation, we used the Kotlin HumanEval dataset, which contains all 161 tasks from HumanEval translated into Kotlin by human experts. You can find more details about the pre-processing necessary to obtain our results, including the code for running the evaluation, on the dataset's page.
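As a rough sketch only (the authoritative pre-processing and scoring code is on the dataset's page), generating one completion per benchmark task could look like the following; the dataset id, split, and field names here are assumptions to verify against the dataset page:

```python
# Rough evaluation sketch, not the official harness. Reuses `tokenizer` and
# `model` from the usage example; dataset id, split, and field names are
# assumptions — see the dataset's page for the actual evaluation code.
from datasets import load_dataset

tasks = load_dataset("JetBrains/Kotlin_HumanEval", split="train")
for task in tasks:
    ids = tokenizer.encode(task["prompt"], return_tensors="pt").to("cuda")
    out = model.generate(ids, max_new_tokens=256, pad_token_id=tokenizer.eos_token_id)
    completion = tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True)
    # Compile and run the task's Kotlin tests against `completion` to score pass@1.
```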
Here are the results of our evaluation:
| Model name | Kotlin HumanEval Pass Rate |
|----------------------------|----------------------------|
| Deepseek-coder-1.3B | 26.71 |
| Deepseek-coder-1.3B-Kexer | 36.65 |
📄 License
The model is licensed under the Apache 2.0 license.
⚠️ Important Note
Deepseek-coder-1.3B-Kexer is a new technology that carries risks with use. The testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Deepseek-coder-1.3B-Kexer's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. The model was fine-tuned on a specific data format (Kotlin tasks), and deviating from this format can also lead to inaccurate or undesirable responses. Therefore, before deploying any application of Deepseek-coder-1.3B-Kexer, developers should perform safety testing and tuning tailored to their specific use of the model.