🚀 Llama 3 8b Special Token Weight Re-initialization
This project targets the original Llama 3 8b (base) model, whose zero-valued special-token weights can lead to NaN gradients. Re-initializing the weights of the affected special tokens mitigates the problem and improves training stability.
🚀 Quick Start
In the original Llama 3 8b (base) model, the weights of several special tokens are zero, which can cause NaN gradients during fine-tuning. This version re-initializes the weights of the following special tokens to mitigate the issue:
- `<|eot_id|>`
- `<|start_header_id|>`
- `<|end_header_id|>`
We set the rows for these tokens in both `embed_tokens` and `lm_head` to the mean of all other tokens' weights.
💻 Usage Example
Basic usage
```python
import argparse

import torch
import transformers


def init_eot_embedding_llama3(
    model_path,
    output_dir,
    special_tokens=["<|eot_id|>", "<|start_header_id|>", "<|end_header_id|>"],
    mean_cutoff=128000,
    dtype=torch.bfloat16,
):
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_path)
    model = transformers.AutoModelForCausalLM.from_pretrained(
        model_path, low_cpu_mem_usage=True, torch_dtype=dtype
    )

    assert model.model.embed_tokens.weight.shape[0] >= mean_cutoff
    assert model.lm_head.weight.shape[0] >= mean_cutoff

    with torch.no_grad():
        for token in special_tokens:
            token_id = tokenizer.convert_tokens_to_ids(token)
            print(f"Token {token} ID {token_id}")
            # Overwrite the zero rows with the mean of the first `mean_cutoff`
            # (non-special) token rows, computed in float32 for accuracy.
            model.model.embed_tokens.weight[token_id] = torch.mean(
                model.model.embed_tokens.weight[:mean_cutoff].to(torch.float32),
                dim=0,
            ).to(dtype)
            model.lm_head.weight[token_id] = torch.mean(
                model.lm_head.weight[:mean_cutoff].to(torch.float32),
                dim=0,
            ).to(dtype)

    tokenizer.save_pretrained(output_dir)
    model.save_pretrained(output_dir)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model-path",
        help="Location of model, or HuggingFace repo ID",
    )
    parser.add_argument(
        "--output-dir",
        help="Location to write resulting model and tokenizer",
    )
    init_eot_embedding_llama3(**vars(parser.parse_args()))


if __name__ == "__main__":
    main()
```
📄 License
This project is released under the llama3 license; see LICENSE for details.