Genji-python 6B
Genji-python 6B is a transformer model designed to assist with writing Python code. You can try it easily through our Colab notebook.
Quick Start
For example usage, or to use the model easily, you can check our Colab notebook:
Notebook
Features
- Based on GPT-J 6B: Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model.
- Python-specific Training: Trained on approximately 4GB of Python-only code.
- Split Checkpoints: The model's checkpoints are split, which reduces system RAM usage during loading and speeds up the loading process.
Installation
This model is only usable with our fork because GPT-J is not yet merged into the main transformers repo. Once it is merged, we will make this model easily loadable.
For now, you need to use this fork:
Fork
To install with pip:
pip install git+https://github.com/finetuneanon/transformers@gpt-neo-localattention3-rp-b
git-lfs also needs to be installed. On Ubuntu:
apt install git-lfs
After it's installed, initialize git-lfs:
git lfs install
Then clone this repo:
git clone https://huggingface.co/NovelAI/genji-python-6B-split
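As a quick sanity check before loading the model (a minimal sketch; it assumes the repo above was cloned into the current working directory), you can verify that the fork imports and that the split checkpoint directory is present:
import os
import transformers

# Confirm the finetuneanon fork is importable and the split checkpoint
# cloned above is available locally before attempting to load the model.
print("transformers version:", transformers.__version__)
assert os.path.isdir("genji-python-6B-split/model"), "clone the split repo first"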
Usage Examples
Basic Usage
We recommend running the model in FP16; that way, it fits on 16GB VRAM cards.
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
GPTNeoForCausalLM,
)
# Load the split checkpoint in FP16 and move it to the GPU.
model = AutoModelForCausalLM.from_pretrained("genji-python-6B-split/model").half().eval().cuda()
# The tokenizer is shared with GPT-Neo 2.7B (GPT-2 BPE vocabulary).
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
text = '''def print_customer_name'''
tokens = tokenizer(text, return_tensors="pt").input_ids
generated_tokens = model.generate(
    tokens.long().cuda(),
    use_cache=True, do_sample=True, top_k=50, temperature=0.3, top_p=0.9,
    repetition_penalty=1.125, min_length=1,
    max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id,
)
last_tokens = generated_tokens[0][len(tokens[0]):]
generated_text = tokenizer.decode(last_tokens)
print("Generation:\n" + generated_text)
When run, this code generates:
Prompt:
def print_customer_name
Generation:
(self, customer):
    """Print the name of a customer."""
    if not self.is_valid():
        return
    print("Customer: {}".format(customer))
Documentation
Model Description
Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on Python-only code approaching 4GB in size.
The split model has its checkpoints split into pieces, which makes it use less system RAM while loading and makes loading faster.
This model needs a bit more effort to set up, as you need to install git-lfs and pull the repo.
| Property | Details |
| --- | --- |
| n_parameters | 6,053,381,344 |
| n_layers | 28* |
| d_model | 4,096 |
| d_ff | 16,384 |
| n_heads | 16 |
| d_head | 256 |
| n_ctx | 2,048 |
| n_vocab | 50,400 (same tokenizer as GPT-2/3) |
| position encoding | Rotary position encodings (RoPE) |
| RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |

* Each layer consists of one feedforward block and one self-attention block.
The model consists of 28 layers with a model dimension of 4096 and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) are applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.
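As a rough cross-check of the parameter count in the table, a back-of-the-envelope estimate (a sketch that ignores biases and layer norms and assumes an untied output head; not the exact accounting) lands within about 0.1% of the listed 6,053,381,344:
# Back-of-the-envelope parameter estimate (biases and layer norms ignored).
n_layers, d_model, d_ff, n_vocab = 28, 4096, 16384, 50400

attention = 4 * d_model * d_model      # q, k, v and output projections
feedforward = 2 * d_model * d_ff       # up- and down-projection
per_layer = attention + feedforward    # one self-attention + one feedforward block
embedding = n_vocab * d_model          # token embedding
lm_head = n_vocab * d_model            # assumed untied output head

print(n_layers * per_layer + embedding + lm_head)  # ~6.05e9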
Training data
GPT-J 6B was pretrained on the Pile, a large-scale curated dataset created by EleutherAI for the purpose of training this model. After pretraining, it was finetuned on the Python code taken from the Pile.
Training procedure
Genji-python-6B was trained for 20k steps on around 655 million tokens with a learning rate of 2e-6.
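For intuition, those figures imply roughly 32,750 tokens per step, or an effective batch of about 16 full-length sequences at the 2,048-token context (a rough estimate that takes the reported numbers at face value and assumes full-length sequences):
# Rough effective batch size implied by the reported training figures.
tokens_total = 655_000_000
steps = 20_000
n_ctx = 2048

tokens_per_step = tokens_total / steps   # 32,750 tokens per optimizer step
print(tokens_per_step / n_ctx)           # ~16 sequences of 2,048 tokens per step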
Intended Use
This model is trained to assist with writing Python code and for having fun trying weird stuff with it.
Technical Details
The architecture and hyperparameters are inherited from GPT-J 6B, including rotary position encodings (RoPE) and the layer and dimension settings listed above; the Python-only finetuning is what adapts the model to code generation.
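For reference, applying rotary position encodings to only the first 64 of each head's 256 dimensions can be sketched as below (a simplified illustration using the half-split rotation convention; the actual GPT-J code interleaves dimension pairs, so treat this as a sketch rather than the exact implementation):
import torch

def apply_rope(x, rotary_dim=64, base=10000):
    # x: (batch, seq_len, n_heads, d_head). Only the first `rotary_dim`
    # dimensions of each head are rotated; the rest pass through unchanged.
    seq_len = x.shape[1]
    half = rotary_dim // 2
    inv_freq = 1.0 / base ** (torch.arange(0, rotary_dim, 2, dtype=torch.float32) / rotary_dim)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None, :]
    cos = angles.cos()[None, :, None, :]  # broadcast over batch and heads
    sin = angles.sin()[None, :, None, :]
    x_rot, x_pass = x[..., :rotary_dim], x[..., rotary_dim:]
    x1, x2 = x_rot[..., :half], x_rot[..., half:]
    rotated = torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
    return torch.cat([rotated, x_pass], dim=-1)

# Example: queries of shape (batch=1, seq_len=8, n_heads=16, d_head=256).
print(apply_rope(torch.randn(1, 8, 16, 256)).shape)  # torch.Size([1, 8, 16, 256])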
License
This project is licensed under the [apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) license.
Acknowledgements
This project was made possible by the compute provided by the TPU Research Cloud and by EleutherAI's pretraining of GPT-J 6B.
Thanks to everyone who contributed to this project.