🚀 llm-jp-13b-v1.0
This repository offers large language models developed by LLM-jp, a collaborative project launched in Japan, aiming to provide advanced language processing capabilities.
🚀 Quick Start
This repository provides access to large-scale language models developed by the LLM-jp project. Below are the available model variants and the steps required to use them.
✨ Features
- Multiple Model Variants: Offers both instruction-tuned and pre-trained models to suit different application scenarios.
- Open-Source License: Released under the Apache License 2.0, facilitating widespread use and development.
📦 Installation
To use these models, you need to install the following libraries with the specified versions:
- torch>=2.0.0
- transformers>=4.34.0
- tokenizers>=0.14.0
- accelerate==0.23.0
You can install them with pip (quote the specifiers so the shell does not interpret `>=` as redirection):

```bash
pip install 'torch>=2.0.0' 'transformers>=4.34.0' 'tokenizers>=0.14.0' 'accelerate==0.23.0'
```
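As a quick sanity check that compatible versions are installed, a minimal sketch (it only imports the packages listed above) is:

```python
# Print the installed versions of the required libraries so they can be
# compared against the version constraints listed above.
import torch, transformers, tokenizers, accelerate

for name, mod in [("torch", torch), ("transformers", transformers),
                  ("tokenizers", tokenizers), ("accelerate", accelerate)]:
    print(f"{name}: {mod.__version__}")
```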
💻 Usage Examples
Basic Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the 13b pre-trained model in half precision,
# letting accelerate place the weights across available devices.
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "llm-jp/llm-jp-13b-v1.0", device_map="auto", torch_dtype=torch.float16
)

text = "自然言語処理とは何か"  # "What is natural language processing?"
tokenized_input = tokenizer.encode(
    text, add_special_tokens=False, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
    )[0]
print(tokenizer.decode(output))
```
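Because `do_sample=True` makes the output stochastic, runs will differ. A small, hedged variation of the example above (reusing `model`, `tokenizer`, and `tokenized_input`) fixes the random seed and strips special tokens when decoding; both are standard torch/transformers calls rather than project-specific requirements:

```python
import torch

torch.manual_seed(0)  # make the sampled continuation reproducible

with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
    )[0]

# skip_special_tokens drops markers such as the end-of-sequence token from the decoded text.
print(tokenizer.decode(output, skip_special_tokens=True))
```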
📚 Documentation
Model Details
- Model type: Transformer-based Language Model
- Total seen tokens: 300B
| Model | Params | Layers | Hidden size | Heads | Context length |
|:---|:---|:---|:---|:---|:---|
| 13b model | 13b | 40 | 5120 | 40 | 2048 |
| 1.3b model | 1.3b | 24 | 2048 | 16 | 2048 |
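As a rough sanity check on the table, a decoder-only Transformer's parameter count is approximately 12 × layers × hidden_size² for the attention and MLP blocks, plus a vocabulary embedding term. The sketch below applies that common back-of-the-envelope estimate to both rows; it is not the project's published parameter accounting.

```python
# Rough parameter estimate for a decoder-only Transformer:
# ~12 * n_layers * hidden_size^2 for attention + MLP blocks,
# plus vocab_size * hidden_size for the token embeddings.
def approx_params(n_layers: int, hidden: int, vocab: int = 50570) -> float:
    return 12 * n_layers * hidden**2 + vocab * hidden

for name, layers, hidden in [("13b model", 40, 5120), ("1.3b model", 24, 2048)]:
    print(f"{name}: ~{approx_params(layers, hidden) / 1e9:.1f}B parameters")
# 13b model: ~12.8B parameters
# 1.3b model: ~1.3B parameters
```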
Training
- Pre-training:
  - Hardware: 96 A100 40GB GPUs (mdx cluster)
  - Software: Megatron-DeepSpeed
- Instruction tuning:
Tokenizer
The tokenizer of this model is based on the huggingface/tokenizers Unigram byte-fallback model.
- Model: Hugging Face Fast Tokenizer using a Unigram byte-fallback model, which requires tokenizers>=0.14.0
- Training algorithm: SentencePiece Unigram byte-fallback
- Training data: A subset of the datasets for model pre-training
- Vocabulary size: 50,570 (mixed vocabulary of Japanese, English, and source code)
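A short sketch of inspecting the mixed-script vocabulary with the fast tokenizer (it only uses the `AutoTokenizer` loading call already shown in the usage example; the sample strings are purely illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-v1.0")
# Total vocabulary size including any added special tokens; may differ slightly
# from the 50,570 figure above depending on how special tokens are counted.
print(len(tokenizer))

# Tokenize Japanese, English, and source code to see the mixed vocabulary in action.
for sample in ["自然言語処理", "natural language processing", "def add(a, b): return a + b"]:
    print(sample, "->", tokenizer.tokenize(sample))
```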
Datasets
Pre-training
The models have been pre-trained using a blend of the following datasets:
Pre-training was conducted continuously over a total of 10 folds of non-overlapping data, each consisting of approximately 27-28B tokens, and was finalized with an additional 27B tokens of (potentially) higher-quality data drawn from the same source datasets.
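The fold sizes are consistent with the 300B total seen tokens listed under Model Details; a quick back-of-the-envelope check, treating each fold as ~27.5B tokens (a midpoint of the stated 27-28B range, not a figure from the card):

```python
folds = 10
tokens_per_fold = 27.5e9   # assumed midpoint of the stated 27-28B range
final_extra = 27e9         # additional higher-quality tokens at the end

total = folds * tokens_per_fold + final_extra
print(f"~{total / 1e9:.0f}B tokens")  # ~302B, in line with the ~300B total seen tokens
```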
Instruction tuning
The models have been fine-tuned on the following datasets:
Evaluation
You can view the evaluation results of several LLMs on this leaderboard. We used llm-jp-eval for the evaluation.
🔧 Technical Details
The models in this repository are based on the Transformer architecture. During pre-training, large-scale data spanning multiple languages and source code is used to learn general language knowledge. Instruction tuning then fine-tunes the models on specific datasets to improve their performance on instruction-following tasks.
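For illustration only, here is a minimal, generic causal-LM fine-tuning sketch built on the transformers Trainer. This is not the official LLM-jp instruction-tuning recipe: the `datasets` dependency, the prompt format, the placeholder data, and the hyperparameters are all assumptions, and a 13b model realistically requires the multi-GPU/DeepSpeed setups mentioned under Training. The sketch only shows how instruction pairs are wired into causal-LM training.

```python
# A minimal, generic causal-LM fine-tuning sketch (NOT the official LLM-jp recipe).
# Assumes the `datasets` library is installed and uses placeholder instruction data.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "llm-jp/llm-jp-13b-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # in practice: distributed setup

# Placeholder prompt/response pairs; the real instruction datasets are listed above.
examples = [{"prompt": "Question: What is NLP?\nAnswer:",
             "response": " NLP is the study of processing human language with computers."}]

def tokenize(batch):
    # Concatenate prompt and response into a single training sequence.
    text = [p + r + tokenizer.eos_token for p, r in zip(batch["prompt"], batch["response"])]
    return tokenizer(text, truncation=True, max_length=2048)

dataset = Dataset.from_list(examples).map(
    tokenize, batched=True, remove_columns=["prompt", "response"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=dataset,
    # mlm=False makes the collator build next-token-prediction labels (causal LM).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```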
📄 License
This project is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
Model Card Authors
The names are listed in alphabetical order.
Hirokazu Kiyomaru, Hiroshi Matsuda, Jun Suzuki, Namgi Han, Saku Sugawara, Shota Sasaki, Shuhei Kurita, Taishi Nakamura, Takumi Okamoto.
⚠️ Important Note
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
💡 Usage Tip
If you have any questions, please send them to llm-jp(at)nii.ac.jp.