🚀 GRM-Llama3.1_8B_rewardmodel-ft Reward Model
This reward model is fine-tuned from GRM-Llama3.1-8B-sftreg on the decontaminated Skywork preference dataset v0.2 and scores 92.6 on reward-bench. It is exposed through a text-classification (sequence-classification) interface and can support related research and applications.
🚀 Quick Start
This reward model is fine-tuned from GRM-Llama3.1-8B-sftreg on the decontaminated Skywork preference dataset v0.2. You can find our GRM series models, the accompanying paper, and the GitHub code repository through the associated project links.
✨ Key Features
- High performance: scores 92.6 on reward-bench.
- High-quality data: fine-tuned on the decontaminated Skywork preference dataset v0.2.
📦 Installation
The original documentation does not list specific installation steps.
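The usage example below only requires PyTorch and the Hugging Face transformers library (for example, `pip install torch transformers`); this is an assumption inferred from the imports in the snippet, not an installation guide from the original card.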
💻 Usage Examples
Basic Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = 'cuda:0'

# Load the tokenizer and the reward model (a sequence-classification head producing a scalar reward).
tokenizer = AutoTokenizer.from_pretrained('Ray2333/GRM-Llama3.1_8B_rewardmodel-ft')
reward_model = AutoModelForSequenceClassification.from_pretrained(
    'Ray2333/GRM-Llama3.1_8B_rewardmodel-ft',
    torch_dtype=torch.float16,
    device_map=device,
)

# A single user/assistant exchange to be scored.
message = [
    {'role': 'user', 'content': "I'm going to go out to a movie, but I need someone to chat with my daughter and pretend to be me while she's home alone. But I can't do that while I'm at the movie. Can you help by impersonating me by chat with her?"},
    {'role': 'assistant', 'content': "Sorry, I'm not comfortable impersonating you in that way. I'm not willing to behave so dishonestly. Maybe you can just find a way to bring her to the movie, or you can find a babysitter?"}
]

# Format the conversation with the model's chat template and tokenize it.
message_template = tokenizer.apply_chat_template(message, tokenize=False)
kwargs = {"padding": 'longest', "truncation": True, "return_tensors": "pt"}
tokens = tokenizer.encode_plus(message_template, **kwargs)

# Run the reward model and read out the scalar reward.
with torch.no_grad():
    reward_tensor = reward_model(
        tokens["input_ids"][0].view(1, -1).to(device),
        attention_mask=tokens["attention_mask"][0].view(1, -1).to(device),
    )[0]
reward = reward_tensor.cpu().detach().item()
```
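A typical use of a reward model is ranking candidate responses to the same prompt. The sketch below reuses the `tokenizer`, `reward_model`, and `device` objects loaded above; the `score` helper and the example prompts are illustrative assumptions, not part of the original card.

```python
def score(messages):
    """Return the scalar reward for a user/assistant conversation (illustrative helper)."""
    text = tokenizer.apply_chat_template(messages, tokenize=False)
    inputs = tokenizer(text, padding='longest', truncation=True, return_tensors="pt").to(device)
    with torch.no_grad():
        logits = reward_model(**inputs)[0]
    return logits.cpu().item()

# Two candidate responses to the same (hypothetical) prompt.
prompt = {'role': 'user', 'content': "Can you help me write a polite follow-up email to a colleague?"}
chosen = [prompt, {'role': 'assistant', 'content': "Of course. Here's a short, polite draft you can adapt: ..."}]
rejected = [prompt, {'role': 'assistant', 'content': "Just email them again and tell them to hurry up."}]

# A well-trained reward model should typically assign the preferred response a higher score.
print(score(chosen), score(rejected))
```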
📚 Documentation
Evaluation
We evaluated GRM_Llama3.1_8B_rewardmodel-ft on the reward model benchmark (reward-bench).
⚠️ Important Note
When evaluating with reward-bench, please add '--not_quantized' to avoid a performance drop.
🔧 Technical Details
The original documentation does not provide further technical details, so this section is skipped.
📄 License
This project is released under the Apache 2.0 license.
📚 Citation
If you find this model helpful for your research, please cite GRM:
```
@inproceedings{yang2024regularizing,
  title={Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs},
  author={Yang, Rui and Ding, Ruomeng and Lin, Yong and Zhang, Huan and Zhang, Tong},
  booktitle={Advances in Neural Information Processing Systems},
  year={2024}
}
```