🚀 Reward Model Trained from Human Feedback
This reward model (RM) is trained to predict which of several generated answers humans would judge as better for a given question. It is useful for:
- Evaluating QA models
- Serving as the reward score in reinforcement learning from human feedback (RLHF)
- Detecting potentially toxic responses via ranking
All models were trained on a common set of human-preference datasets, using the same split seed across datasets (handled accordingly where no validation split is available).
🚀 Quick Start
Model usage example
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the reward model and its tokenizer
reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
rank_model = AutoModelForSequenceClassification.from_pretrained(reward_name)
tokenizer = AutoTokenizer.from_pretrained(reward_name)

question, answer = "Explain nuclear fusion like I am five", "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."

# Encode the question and answer as a sequence pair; the single logit is the
# reward score, where higher means the answer is predicted to be preferred.
inputs = tokenizer(question, answer, return_tensors='pt')
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
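The same call can be used to rank several candidate answers to one question, which is how the model supports QA evaluation or best-of-n selection. A minimal sketch, reusing rank_model, tokenizer and question from the example above; candidate_answers is a hypothetical list of generated answers:

import torch

candidate_answers = [
    "Nuclear fusion happens when two tiny atoms squeeze together into one and release a lot of energy, like inside the sun.",
    "It is a process.",
]
with torch.no_grad():
    # Tokenize every (question, answer) pair in one padded batch
    inputs = tokenizer([question] * len(candidate_answers), candidate_answers,
                       return_tensors='pt', padding=True, truncation=True)
    scores = rank_model(**inputs).logits.squeeze(-1)
# Higher score = answer predicted to be preferred by humans
best = candidate_answers[scores.argmax().item()]
print(best)

Note that the scores are unnormalized logits, so only their relative order is meaningful.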
Toxic response detection example
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
rank_model = AutoModelForSequenceClassification.from_pretrained(reward_name)
tokenizer = AutoTokenizer.from_pretrained(reward_name)

question = "I just came out of from jail, any suggestion of my future?"
helpful = "It's great to hear that you have been released from jail."
bad = "Go back to jail you scum"

# Score both replies against the same question; the toxic reply should
# receive a lower reward than the helpful one.
inputs = tokenizer(question, helpful, return_tensors='pt')
good_score = rank_model(**inputs).logits[0].cpu().detach()

inputs = tokenizer(question, bad, return_tensors='pt')
bad_score = rank_model(**inputs).logits[0].cpu().detach()

print(good_score > bad_score)
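If the model was trained with the usual pairwise ranking objective, the gap between two scores can also be mapped to an approximate preference probability via a sigmoid (the Bradley-Terry formulation). A small sketch reusing good_score and bad_score from above:

import torch

# Approximate probability that the helpful reply would be preferred over the
# toxic one; this reading assumes the standard pairwise training loss.
prob_helpful_preferred = torch.sigmoid(good_score - bad_score)
print(prob_helpful_preferred)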
✨ Key Features
- Multi-domain use: suitable for QA model evaluation, reinforcement learning from human feedback, and toxic-response detection (a batched scoring sketch for the RLHF setting follows this list).
- Multi-dataset training: trained on several high-quality human-preference datasets, which helps the model generalize.
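For the RLHF use case, the reward model is usually called on batches of sampled responses at each optimization step. Below is a minimal, self-contained sketch of such a batched scoring helper; compute_rewards, its arguments, and the batching scheme are illustrative assumptions, not an API provided by this repository:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
reward_model = AutoModelForSequenceClassification.from_pretrained(reward_name).eval()
reward_tokenizer = AutoTokenizer.from_pretrained(reward_name)

def compute_rewards(questions, responses, batch_size=8):
    """Return one scalar reward per (question, response) pair."""
    rewards = []
    for i in range(0, len(questions), batch_size):
        inputs = reward_tokenizer(questions[i:i + batch_size],
                                  responses[i:i + batch_size],
                                  return_tensors='pt', padding=True, truncation=True)
        with torch.no_grad():
            logits = reward_model(**inputs).logits.squeeze(-1)
        rewards.extend(logits.tolist())
    return rewards

# Example: rewards for two sampled responses to the same prompt
print(compute_rewards(["What is the capital of France?"] * 2,
                      ["Paris is the capital of France.", "I don't know."]))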
📚 Documentation
Performance
SyntheticGPT likely contains some kind of surface pattern in its chosen-rejected pairs that makes it relatively easy to tell the better answer apart.
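Accuracy on such pairwise data is the fraction of comparisons in which the chosen answer receives a higher score than the rejected one. A sketch of that metric, assuming rank_model and tokenizer from the Quick Start are loaded and eval_pairs is a hypothetical list of (prompt, chosen, rejected) triples:

import torch

def pairwise_accuracy(model, tokenizer, eval_pairs):
    """Fraction of pairs where the chosen answer outscores the rejected one."""
    correct = 0
    for prompt, chosen, rejected in eval_pairs:
        with torch.no_grad():
            chosen_score = model(**tokenizer(prompt, chosen, return_tensors='pt',
                                             truncation=True)).logits[0]
            rejected_score = model(**tokenizer(prompt, rejected, return_tensors='pt',
                                               truncation=True)).logits[0]
        correct += int(chosen_score.item() > rejected_score.item())
    return correct / len(eval_pairs)

eval_pairs = [("Explain nuclear fusion like I am five",
               "Fusion means two small atoms join into one and release energy.",
               "Fusion is when stuff explodes.")]
print(pairwise_accuracy(rank_model, tokenizer, eval_pairs))

A dataset on which this check is almost always right (as suspected for SyntheticGPT above) suggests the pairs are easy to separate rather than that the model is unusually strong.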
Acknowledgements
Many thanks to stability.ai for their unwavering support with A100 compute resources. Their contribution was essential to the completion of this research project.
📄 License
This project is released under the MIT license.