LongLoRA and LongAlpaca for Long-context LLMs
LongLoRA and LongAlpaca are designed to enable efficient fine-tuning of long-context large language models, offering a range of models and a long-context instruction-following dataset.
For detailed usage and code, please visit the GitHub project.
Quick Start
To quickly get started with LongLoRA and LongAlpaca:
- Fork this repo on GitHub.
- Clone the repository to your local machine using `git clone` and the URL of this project.
- Run the following commands:
pip install -r requirements.txt
pip install flash-attn --no-build-isolation
- Use either a released model or fine-tune a model to fit your preferences.
- Test your model by chatting with it (a minimal inference sketch follows this list).
- Deploy your own demo.
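As a minimal sketch of the "test your model by chat" step, the snippet below loads one of the released models with Hugging Face transformers and generates a reply. The repository id `Yukang/LongAlpaca-7B`, the prompt, and the generation settings are assumptions; substitute the checkpoint you actually downloaded, and note that very long contexts require correspondingly more GPU memory.

```python
# Minimal chat sketch with a released model (assumed repo id: Yukang/LongAlpaca-7B).
# Plain Hugging Face transformers usage, not the repository's own inference script.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Yukang/LongAlpaca-7B"  # assumption: replace with the checkpoint you use
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce GPU memory
    device_map="auto",
)

prompt = "Below is a paper abstract. Summarize it in two sentences.\n\n<paste long text here>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```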
Features
- In the LongLoRA approach, the proposed shifted short attention is easy to implement, compatible with Flash-Attention, and is not required during inference (a conceptual sketch follows this list).
- We released all our models, from 7B to 70B, with context lengths from 8k to 100k, including LLaMA2-LongLoRA-7B-100k, LLaMA2-LongLoRA-13B-64k, and LLaMA2-LongLoRA-70B-32k.
- We built a long-context instruction-following dataset, LongAlpaca-12k, and released the corresponding LongAlpaca-7B, LongAlpaca-13B, and LongAlpaca-70B models. To the best of our knowledge, this is the first open-sourced long-context 70B model.
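The shifted short attention idea can be summarized in a few lines. The sketch below is a conceptual illustration based on the paper's description (group-wise attention, with half of the attention heads shifted by half a group so information flows between neighboring groups); it is not the repository's implementation, and it omits causal masking and padding for brevity.

```python
# Conceptual sketch of shifted short attention (S2-Attn); not the repo's implementation.
import torch
import torch.nn.functional as F

def shifted_short_attention(q, k, v, group_size):
    # q, k, v: (batch, heads, seq_len, head_dim); seq_len must be divisible by group_size
    B, H, N, D = q.shape
    half_heads, half_group = H // 2, group_size // 2

    def group_attn(q_, k_, v_):
        # Attend only within each group of `group_size` tokens.
        split = lambda x: x.reshape(B, -1, N // group_size, group_size, D)
        out = F.scaled_dot_product_attention(split(q_), split(k_), split(v_))
        return out.reshape(B, -1, N, D)

    # First half of the heads: plain group-wise attention.
    out1 = group_attn(q[:, :half_heads], k[:, :half_heads], v[:, :half_heads])

    # Second half: shift tokens by half a group before grouping, so information
    # flows between neighboring groups, then shift the result back.
    roll = lambda x, s: torch.roll(x, shifts=s, dims=2)
    out2 = group_attn(roll(q[:, half_heads:], -half_group),
                      roll(k[:, half_heads:], -half_group),
                      roll(v[:, half_heads:], -half_group))
    out2 = roll(out2, half_group)

    return torch.cat([out1, out2], dim=1)

# Tiny smoke test: 1 sequence, 4 heads, 64 tokens, groups of 16.
q = k = v = torch.randn(1, 4, 64, 32)
print(shifted_short_attention(q, k, v, group_size=16).shape)  # torch.Size([1, 4, 64, 32])
```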
How to Contribute
- Make sure you have `git` installed.
- Create your own fork of the project.
- Clone the repository to your local machine using `git clone` and the URL of this project.
- Read both the Usage Requirements and Installation Steps sections below.
- Commit and push your changes.
- Make a pull request when you have finished modifying the project.
Installation
Usage Requirements
To download and use the pre-trained weights, you will need:
- A Hugging Face (HF) account with a valid email. Note that the email used for HF must also be the one used for the Meta license agreement.
- To accept the Meta license and acceptable use policy.
Installation Steps
To install and run the application:
- Fork this repo on GitHub.
- Clone the repository to your local machine using `git clone` and the URL of this project.
- Run the following commands:
pip install -r requirements.txt
pip install flash-attn --no-build-isolation
Usage Examples
Basic Usage
After installation, you can use a released model or fine-tune a model according to your needs. For example, to fine-tune a model:
torchrun --nproc_per_node=8 fine-tune.py \
--model_name_or_path path_to/Llama-2-7b-hf \
--bf16 True \
--output_dir path_to_saving_checkpoints \
--cache_dir path_to_cache \
--model_max_length 8192 \
--use_flash_attn True \
--low_rank_training False \
--num_train_epochs 1 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 1000 \
--save_total_limit 2 \
--learning_rate 2e-5 \
--weight_decay 0.0 \
--warmup_steps 20 \
--lr_scheduler_type "constant_with_warmup" \
--logging_steps 1 \
--deepspeed "ds_configs/stage2.json" \
--tf32 True \
--max_steps 1000
- Please remember to change `path_to/Llama-2-7b-hf`, `path_to_saving_checkpoints`, and `path_to_cache` to your own directories.
- Note that you can change `model_max_length` to other values.
- You can change `ds_configs/stage2.json` to `ds_configs/stage3.json` if you want.
- Please set `use_flash_attn` to `False` if you use V100 machines or have not installed Flash Attention.
- You can set `low_rank_training` to `False` if you want full fine-tuning; it costs more GPU memory and is slower, but the performance is better (a sketch of the low-rank setup follows this list).
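For orientation, the snippet below sketches what the improved-LoRA (LoRA+) setup roughly corresponds to: standard LoRA adapters on the attention projections, with the embedding and normalization layers also made trainable, as the paper describes. It uses the generic `peft` library rather than the repository's training code, and the rank, alpha, and target-module names are assumed values.

```python
# Sketch of an improved-LoRA (LoRA+) setup: LoRA adapters plus trainable embedding
# and normalization layers. Generic peft/transformers code, not the repository's
# fine-tune.py; ranks and target modules below are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("path_to/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=8,                      # assumed rank
    lora_alpha=16,
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
)
model = get_peft_model(model, lora_config)

# The "plus" part: embeddings and norm layers are trained in addition to the
# low-rank adapters, which the paper reports matters for long-context adaptation.
for name, param in model.named_parameters():
    if "embed" in name or "norm" in name:
        param.requires_grad_(True)

model.print_trainable_parameters()
```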
Documentation
LongAlpaca Data
LongAlpaca-12k contains 9k long QA data that we collected and 3k short QA sampled from the original Alpaca data. This is to avoid the model degrading at short instruction following. The data we collected covers various types and amounts, as shown in the following table:
Data | Short QA | Long QA | Total | Download |
---|---|---|---|---|
LongAlpaca-12k | 3k | 9k | 12k | Link |
Following the original Alpaca format, our Long QA data uses the following prompt fields for fine-tuning:
- `instruction`: `str`, describes the task the model should perform, for example, answering a question after reading a book section or paper. We vary the contents and questions to make instructions diverse.
- `output`: `str`, the answer to the instruction.
We did not use the `input` field of the Alpaca format, for simplicity.
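To make the format concrete, a single LongAlpaca-style record would look roughly like the following; the text shown here is invented for illustration only.

```python
# Illustrative LongAlpaca-style record; the contents below are made up.
record = {
    "instruction": (
        "Below is a paper. Read the paper and answer the question.\n\n"
        "<full paper text, possibly tens of thousands of tokens>\n\n"
        "Question: What problem does the proposed method address?"
    ),
    "output": "The paper addresses efficient fine-tuning of long-context language models.",
}
```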
Models
Models with supervised fine-tuning
Model | Size | Context | Train | Link |
---|---|---|---|---|
LongAlpaca-7B | 7B | 32768 | Full FT | Model |
LongAlpaca-13B | 13B | 32768 | Full FT | Model |
LongAlpaca-70B | 70B | 32768 | LoRA+ | Model (LoRA-weight) |
Models with context extension via full fine-tuning
Model | Size | Context | Train | Link |
---|---|---|---|---|
Llama-2-7b-longlora-8k-ft | 7B | 8192 | Full FT | Model |
Llama-2-7b-longlora-16k-ft | 7B | 16384 | Full FT | Model |
Llama-2-7b-longlora-32k-ft | 7B | 32768 | Full FT | Model |
Llama-2-7b-longlora-100k-ft | 7B | 100000 | Full FT | Model |
Llama-2-13b-longlora-8k-ft | 13B | 8192 | Full FT | Model |
Llama-2-13b-longlora-16k-ft | 13B | 16384 | Full FT | Model |
Llama-2-13b-longlora-32k-ft | 13B | 32768 | Full FT | Model |
Models with context extension via improved LoRA fine-tuning
Model | Size | Context | Train | Link |
---|---|---|---|---|
Llama-2-7b-longlora-8k | 7B | 8192 | LoRA+ | LoRA-weight |
Llama-2-7b-longlora-16k | 7B | 16384 | LoRA+ | LoRA-weight |
Llama-2-7b-longlora-32k | 7B | 32768 | LoRA+ | LoRA-weight |
Llama-2-13b-longlora-8k | 13B | 8192 | LoRA+ | LoRA-weight |
Llama-2-13b-longlora-16k | 13B | 16384 | LoRA+ | LoRA-weight |
Llama-2-13b-longlora-32k | 13B | 32768 | LoRA+ | LoRA-weight |
Llama-2-13b-longlora-64k | 13B | 65536 | LoRA+ | LoRA-weight |
Llama-2-70b-longlora-32k | 70B | 32768 | LoRA+ | LoRA-weight |
Llama-2-70b-chat-longlora-32k | 70B | 32768 | LoRA+ | LoRA-weight |
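The rows marked "LoRA-weight" ship only adapter weights, so they need to be combined with the corresponding base model before standalone use. The repository provides its own merge script; the snippet below is a generic sketch using `peft`, with placeholder paths, and may not cover every extra trainable parameter the released adapters contain.

```python
# Generic sketch for merging released LoRA weights into a base model with peft.
# The repository ships its own merge script; paths below are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path_to/Llama-2-7b-hf")
merged = PeftModel.from_pretrained(base, "path_to/longlora-lora-weights").merge_and_unload()

merged.save_pretrained("path_to/merged-model")
AutoTokenizer.from_pretrained("path_to/Llama-2-7b-hf").save_pretrained("path_to/merged-model")
```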
Training
Pre-trained weights
We use LLaMA2 models as the pre-trained weights and fine-tune them to long context window sizes. Download them based on your choice (a download sketch follows the table).
Pre-trained weights |
---|
Llama-2-7b-hf |
Llama-2-13b-hf |
Llama-2-70b-hf |
Llama-2-7b-chat-hf |
Llama-2-13b-chat-hf |
Llama-2-70b-chat-hf |
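If you want to fetch a base checkpoint from the Hugging Face Hub ahead of time, one option is `huggingface_hub.snapshot_download`; you must first have accepted the Meta license with the same HF account and be logged in (for example via `huggingface-cli login`). The local directory below is a placeholder.

```python
# One way to pre-download a base checkpoint from the Hugging Face Hub.
# Requires an HF account that has accepted the Meta license; local_dir is arbitrary.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="meta-llama/Llama-2-7b-hf",
    local_dir="path_to/Llama-2-7b-hf",  # placeholder directory
)
```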
This project also supports GPTNeoX models as the base model architecture. Some candidate pre-trained weights include GPT-NeoX-20B, Polyglot-ko-12.8B, and other variants.
Fine-tuning
The fine-tuning command and the accompanying notes are the same as shown in the Basic Usage example above.
Technical Details
In the LongLoRA approach, the shifted short attention mechanism is a key technical point: it is easy to implement and is compatible with Flash-Attention. During fine-tuning, different training methods (such as full fine-tuning and LoRA+) are used for different models to reach different context lengths. The data in LongAlpaca-12k is carefully collected and processed to maintain the model's performance on both long and short instructions.
License
- Code License: Apache 2.0
- Data License: CC BY-NC 4.0
- Weight License: CC BY-NC 4.0
News
- [x] [2023.10.8] We release the long instruction-following dataset, LongAlpaca-12k, and the corresponding models, LongAlpaca-7B, LongAlpaca-13B, and LongAlpaca-70B.
- (The previous SFT models, Llama-2-13b-chat-longlora-32k-sft and Llama-2-70b-chat-longlora-32k-sft, have been deprecated.)
- [x] [2023.10.3] We add support for GPTNeoX models. Please refer to this PR for usage. Thanks to @naubull2 for this contribution.
- [x] [2023.9.22] We release all our fine-tuned models, including the 70B-32k model LLaMA2-LongLoRA-70B-32k and LLaMA2-LongLoRA-7B-100k. Welcome to check them out!
- [x] [2023.9.22] We release the paper and this GitHub repo, including training and evaluation code.
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models [Paper]
Yukang Chen,
Shengju Qian,
Haotian Tang,
Xin Lai,
Zhijian Liu,
Song Han,
Jiaya Jia

