🚀 Llama 3 8B for Japanese
This repository contains a model that attempts to adapt Llama 3 for the Japanese language.

🚀 Quick Start
The model was last updated on April 23rd, so we recommend downloading the latest version.
📄 License
This model is released under the Llama 3 License, which permits commercial use. Please read the Llama 3 license carefully before using the model.
💻 Usage Examples
Basic Usage
For a quick try, use the demo or the Colab notebook.
For local use, follow these steps:
First, install the libraries as follows:
pip install -U transformers accelerate
Then, run the following code:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the model (bfloat16 weights, automatically placed on available devices).
tokenizer = AutoTokenizer.from_pretrained("alfredplpl/Llama-3-8B-Instruct-Ja")
model = AutoModelForCausalLM.from_pretrained(
    "alfredplpl/Llama-3-8B-Instruct-Ja",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

messages = [
    {
        "role": "system",
        "content": "あなたは日本語で回答するAIアシスタントです。"  # "You are an AI assistant that answers in Japanese."
    },
    {
        "role": "user",
        "content": "猫と犬、どっちが好き?"  # "Cats or dogs, which do you like better?"
    }
]

# Build the Llama 3 chat prompt and append the assistant header so the model starts its reply.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# The chat template already includes <|begin_of_text|>, so don't add special tokens again here.
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

outputs = model.generate(
    **input_ids,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.95,
    temperature=0.2,
    repetition_penalty=1.1,
    eos_token_id=[
        tokenizer.eos_token_id,
        tokenizer.convert_tokens_to_ids("<|eot_id|>")  # Llama 3 end-of-turn token
    ],
)
print(tokenizer.decode(outputs[0]))
You should get a result like this:
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
あなたは日本語で回答するAIアシスタントです。<|eot_id|><|start_header_id|>user<|end_header_id|>
猫と犬、どっちが好き?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
猫と犬の両方を飼っているので、どちらも好きだ!<|eot_id|>
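If you only need the assistant's reply without the echoed prompt, you can decode just the newly generated tokens. This is a minimal post-processing sketch that reuses the variables from the example above; it is not part of the original example.

# Decode only the tokens generated after the prompt, skipping special tokens.
prompt_length = input_ids["input_ids"].shape[-1]
reply = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(reply)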
🔧 Technical Details
Training Data
- llm-jp/databricks-dolly-15k-ja
- cl-nagoya/auto-wiki-qa
- meta-llama/Meta-Llama-3-8B-Instruct
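For reference, the two instruction-tuning datasets above are hosted on the Hugging Face Hub and can be inspected with the datasets library. A minimal sketch; the split name and column names are assumptions and may differ:

from datasets import load_dataset

# Load the Japanese Dolly instruction data and the auto-generated Wikipedia QA data (split assumed to be "train").
dolly_ja = load_dataset("llm-jp/databricks-dolly-15k-ja", split="train")
wiki_qa = load_dataset("cl-nagoya/auto-wiki-qa", split="train")
print(dolly_ja[0])  # inspect one record to see the instruction/response fields
print(wiki_qa[0])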
Training Method
We performed 1 epoch of LoRA-based instruction tuning on meta-llama/Meta-Llama-3-8B-Instruct using approximately 2.4 million training samples from cl-nagoya/auto-wiki-qa and then merged the LoRA.
After that, we conducted 5 epochs of LoRA-based instruction tuning on the resulting model using llm-jp/databricks-dolly-15k-ja and merged the LoRA again.
All of this training was supervised fine-tuning.
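The sketch below illustrates this kind of LoRA instruction tuning followed by merging, using the peft library. The rank, alpha, target modules, and output path are illustrative assumptions and do not reproduce the exact training recipe.

import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Start from the base instruct model.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach LoRA adapters (rank and target modules here are illustrative, not the actual recipe).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# ... run 1 epoch of supervised instruction tuning on cl-nagoya/auto-wiki-qa here ...

# Merge the trained LoRA weights back into the base model, producing a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("llama-3-8b-instruct-ja-stage1")  # hypothetical output path

# The same procedure is then repeated on the merged model with
# llm-jp/databricks-dolly-15k-ja for 5 epochs.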
Hardware
Software
Training Time
| Property | Details |
|----------|---------|
| Model Type | Llama 3 8B for Japanese |
| Training Data | llm-jp/databricks-dolly-15k-ja, cl-nagoya/auto-wiki-qa, meta-llama/Meta-Llama-3-8B-Instruct |