🚀 Phi 2 Persona Chat Model
Phi 2 Persona Chat is a version of the base Phi 2 model, LoRA fine-tuned on the nazlicanto/persona-based-chat dataset. The dataset contains over 64,000 conversations between a Person A and a Person B, each accompanied by a list of persona facts.
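If you want to inspect the data first, it can be loaded directly from the Hugging Face Hub. A minimal sketch; the "train" split name is an assumption here, so consult the dataset card for the exact schema:

from datasets import load_dataset

# Load the persona dataset from the Hugging Face Hub.
dataset = load_dataset("nazlicanto/persona-based-chat")

# Print one record to see the persona facts, dialogue turns, and reference
# reply. The "train" split name is an assumption; check the dataset card.
print(dataset["train"][0])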
The model was trained with the supervised fine-tuning trainer, using the reference replies as target outputs. For the training and inference code, along with the full list of dependencies, see the GitHub repository.
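The actual training script lives in the repository; the following is only a minimal sketch of what LoRA fine-tuning with trl's SFTTrainer can look like. The hyperparameters, target module names, and placeholder training example are assumptions, not the values used for this model, and the SFTTrainer keyword arguments follow older trl releases (newer versions move dataset_text_field, max_seq_length, and tokenizer into an SFTConfig).

from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

# Start from the base Phi 2 model.
base = "microsoft/phi-2"
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token  # common practice when no pad token is set

# Placeholder: in practice, each dialogue plus its reference reply is
# formatted into a single training string (see the repo's preprocessing).
train_dataset = Dataset.from_dict({
    "text": [
        "Person B has the following Persona information.\n"
        "Persona of Person B: ...\n"
        "Instruct: ...\n"
        "Persona A: ...\n"
        "Output: <reference reply>"
    ]
})

# LoRA adapter config; ranks and target module names are illustrative
# assumptions and depend on the Phi 2 implementation in use.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
)

args = TrainingArguments(
    output_dir="phi-2-persona-chat",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=1024,
    peft_config=peft_config,
    tokenizer=tokenizer,
)
trainer.train()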
🚀 Quick Start
Running the Model
Note that running Phi 2 currently requires setting trust_remote_code=True. For best results, use a prompt that contains the persona facts followed by at least one turn of conversation.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# The prompt follows the training format: persona facts for Person B, an
# Instruct block, and at least one conversation turn (kept verbatim here).
prompt = """
Person B has the following Persona information.
Persona of Person B: My name is David and I'm a 35 year old math teacher.
Persona of Person B: I like to hike and spend time in the nature.
Persona of Person B: I'm married with two kids.
Instruct: Person A and Person B are now having a conversation. Following the conversation below, write a response that Person B would say base on the above Persona information. Please carefully consider the flow and context of the conversation below, and use the Person B's Persona information appropriately to generate a response that you think are the most appropriate replying for Person B.
Persona A: Morning! I think I saw you at the parent meeting, what's your name?
Output:
"""

# Load the fine-tuned model and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("nazlicanto/phi-2-persona-chat", trust_remote_code=True)
model.to("cuda")
tokenizer = AutoTokenizer.from_pretrained("nazlicanto/phi-2-persona-chat", trust_remote_code=True)

# Tokenize the prompt and move it to the GPU.
input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()

# Generate a response with nucleus sampling.
with torch.inference_mode():
    outputs = model.generate(
        input_ids=input_ids,
        max_new_tokens=50,
        do_sample=True,
        top_p=0.1,
        temperature=0.7,
    )

# Decode the output and strip the prompt to keep only the new reply.
outputs = outputs.detach().cpu().numpy()
outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)
output = outputs[0][len(prompt):]
print(output)
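To keep the dialogue going over multiple turns, append each generated reply and the next Person A line to the prompt and generate again. A minimal sketch reusing the model and tokenizer loaded above; the generate_reply helper and the multi-turn prompt layout are assumptions based on the single-turn format, not part of the original card:

def generate_reply(prompt: str) -> str:
    # Hypothetical helper that wraps the generation loop shown above.
    input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()
    with torch.inference_mode():
        outputs = model.generate(
            input_ids=input_ids,
            max_new_tokens=50,
            do_sample=True,
            top_p=0.1,
            temperature=0.7,
        )
    decoded = tokenizer.batch_decode(outputs.cpu(), skip_special_tokens=True)[0]
    return decoded[len(prompt):].strip()

# Append the generated reply and the next Person A turn, then generate again.
reply = generate_reply(prompt)
prompt += reply + "\nPersona A: Nice to meet you! How long have you been teaching?\nOutput:\n"
print(generate_reply(prompt))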
This model was trained by nazlicanto and adirik.
📄 License
This project is licensed under the MIT license.