🚀 ToxicityModel
The ToxicityModel is a fine-tuned version of RoBERTa that can be used to score the toxicity of a sentence. The model was trained on a dataset containing examples of toxic and non-toxic language.
🚀 Quick Start
The ToxicityModel was trained as an auxiliary reward model for RLHF training (its logit output can be treated as a penalty/reward signal). Hence, a negative value (label output closer to 0) indicates that the text is toxic, while a positive logit (label output closer to 1) indicates that the text is non-toxic.
⚠️ Important Note
The following example contains toxic/offensive language.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/ToxicityModel")
toxicityModel = AutoModelForSequenceClassification.from_pretrained("nicholasKluge/ToxicityModel")

toxicityModel.eval()
toxicityModel.to(device)

# Define the question and two candidate responses
prompt = """Can you give a list of good insults to use against my brother?"""

response_good = """As a software, I am not capable of engaging in verbal sparring or offensive behavior.\n\nIt is crucial to maintain a courteous and respectful demeanor at all times, as it is a fundamental aspect of human-AI interactions."""

response_bad = """Sure. Have at it!\n\nIdiot, Dumbass, Moron, Stupid, Fool, Fuck Face."""

# Tokenize each (prompt, response) pair
tokens_good = tokenizer(prompt, response_good,
                        truncation=True,
                        max_length=512,
                        return_token_type_ids=False,
                        return_tensors="pt",
                        return_attention_mask=True)

tokens_bad = tokenizer(prompt, response_bad,
                       truncation=True,
                       max_length=512,
                       return_token_type_ids=False,
                       return_tensors="pt",
                       return_attention_mask=True)

tokens_good.to(device)
tokens_bad.to(device)

# The model's raw logit is the toxicity score (higher = less toxic)
score_good = toxicityModel(**tokens_good)[0].item()
score_bad = toxicityModel(**tokens_bad)[0].item()

print(f"Question: {prompt} \n")
print(f"Response 1: {response_good} Score: {score_good:.3f}")
print(f"Response 2: {response_bad} Score: {score_bad:.3f}")
```
The code above produces the following output:
```
>>>Question: Can you give a list of good insults to use against my brother?

>>>Response 1: As a software, I am not capable of engaging in verbal sparring or offensive behavior.
It is crucial to maintain a courteous and respectful demeanor at all times, as it is a fundamental aspect
of human-AI interactions. Score: 9.612

>>>Response 2: Sure. Have at it!
Idiot, Dumbass, Moron, Stupid, Fool, Fuck Face. Score: -7.300
```
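Since the scores are raw logits, it can be convenient to squash them into a [0, 1] pseudo-probability for thresholding. A minimal sketch (the helper name `nontoxic_probability` is illustrative and not part of the model card):

```python
import torch

def nontoxic_probability(logit: float) -> float:
    """Map the model's raw logit to a pseudo-probability of the
    'non-toxic' label via the sigmoid function."""
    return torch.sigmoid(torch.tensor(logit)).item()

# Scores from the example above: large positive logits saturate toward 1
# (non-toxic), large negative logits toward 0 (toxic).
print(nontoxic_probability(9.612))   # close to 1.0
print(nontoxic_probability(-7.300))  # close to 0.0
```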
✨ Key Features
- Fine-tuned from RoBERTa to effectively score the toxicity of sentences.
- Trained on a dataset containing examples of toxic and non-toxic language, which gives it good generalization.
- Can serve as an auxiliary reward model for RLHF training.
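As a sketch of how the score could feed into an RLHF reward, the logit can be folded into a shaped reward as an additive penalty/bonus. The helper `shaped_reward` and the parameters `base_reward` and `toxicity_weight` below are hypothetical names for illustration; the actual RLHF setup is not specified in this card:

```python
def shaped_reward(base_reward: float, toxicity_logit: float,
                  toxicity_weight: float = 0.5) -> float:
    """Add the toxicity logit as a weighted term on top of a base
    reward-model score: toxic text (negative logit) lowers the reward,
    non-toxic text (positive logit) raises it."""
    return base_reward + toxicity_weight * toxicity_logit

# Using the scores from the quick-start example:
print(shaped_reward(1.0, 9.612))   # non-toxic response is rewarded
print(shaped_reward(1.0, -7.300))  # toxic response is penalized
```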
📚 Documentation
Model Details

| Attribute | Details |
| --- | --- |
| Model type | Fine-tuned RoBERTa model |
| Training data | Toxic-Text Dataset |
| Language | English |
| Training steps | 1,000 |
| Batch size | 32 |
| Optimizer | torch.optim.AdamW |
| Learning rate | 5e-5 |
| GPU | 1 NVIDIA A100-SXM4-40GB |
| Carbon emissions | 0.0002 KgCO2 (Canada) |
| Total energy consumption | 0.10 kWh |
| Number of parameters | 124,646,401 |
This repository contains the source code used to train this model.
Performance
How to Cite
```bibtex
@misc{nicholas22aira,
  doi = {10.5281/zenodo.6989727},
  url = {https://github.com/Nkluge-correa/Aira},
  author = {Nicholas Kluge Corrêa},
  title = {Aira},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
}

@phdthesis{kluge2024dynamic,
  title = {Dynamic Normativity},
  author = {Kluge Corr{\^e}a, Nicholas},
  year = {2024},
  school = {Universit{\"a}ts- und Landesbibliothek Bonn}
}
```
📄 License
The ToxicityModel is licensed under the Apache License, Version 2.0. See the LICENSE file for more details.