🚀 Adapter AdapterHub/roberta-base-pf-imdb for roberta-base
This is an adapter for the roberta-base model, trained on the sentiment/imdb dataset and including a prediction head for classification.
This adapter was created for use with the **adapter-transformers** library.
🚀 Quick Start
Installing dependencies
First, install adapter-transformers:
```bash
pip install -U adapter-transformers
```
⚠️ Important note
adapter-transformers is a fork of transformers that works as a drop-in replacement with adapter support. More information can be found in the adapter-transformers documentation.
Loading and activating the adapter
Now, the adapter can be loaded and activated as follows:
```python
from transformers import AutoModelWithHeads

# Load the base model with flexible-head support
model = AutoModelWithHeads.from_pretrained("roberta-base")
# Download the adapter (and its classification head) from the Hugging Face Hub
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf")
# Activate the adapter so it is used in every forward pass
model.active_adapters = adapter_name
```
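With the adapter active, the bundled classification head can be used directly for sentiment prediction. Below is a minimal inference sketch continuing from the snippet above; the label mapping (1 = positive) is an assumption and should be verified against the head's config in practice.

```python
# Minimal inference sketch (continues from the loading snippet above).
import torch
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model.eval()

inputs = tokenizer("This movie was absolutely wonderful!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)

pred = logits.argmax(dim=-1).item()
# Assumption: index 1 corresponds to "positive"; check the head config to confirm.
print("positive" if pred == 1 else "negative")
```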
🔧 Technical Details
Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, the training configurations for all tasks can be found in that repository.
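For orientation, here is a rough sketch of how a comparable adapter could be trained with adapter-transformers. It is illustrative only: the adapter name, head setup, and training arguments are assumptions, and the exact hyperparameters live in the run configs linked above.

```python
# Illustrative adapter-training sketch (assumed setup, not the authors' exact configuration).
from datasets import load_dataset
from transformers import AutoModelWithHeads, RobertaTokenizer, TrainingArguments
from transformers.adapters import AdapterTrainer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelWithHeads.from_pretrained("roberta-base")
model.add_adapter("imdb")                        # default (Pfeiffer-style) config, matching the "-pf-" naming
model.add_classification_head("imdb", num_labels=2)
model.train_adapter("imdb")                      # freeze the base model, train only the adapter

trainer = AdapterTrainer(
    model=model,
    args=TrainingArguments(output_dir="imdb_adapter", num_train_epochs=3),
    train_dataset=dataset["train"],
    tokenizer=tokenizer,                         # enables dynamic padding in the default collator
)
trainer.train()
```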
Evaluation Results
For more information on the evaluation results, please refer to the paper.
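As a quick sanity check rather than a substitute for the paper's numbers, an accuracy estimate on a slice of the IMDB test set could look like the following, reusing the model and tokenizer loaded in the Quick Start. The alignment between the head's label indices and the dataset's labels is assumed and should be verified.

```python
# Rough accuracy check on 200 IMDB test examples (illustrative; not the paper's evaluation protocol).
import torch
from datasets import load_dataset

test = load_dataset("imdb", split="test[:200]")
correct = 0
for ex in test:
    enc = tokenizer(ex["text"], truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        pred = model(**enc).logits.argmax(dim=-1).item()
    correct += int(pred == ex["label"])  # assumes head labels match dataset labels
print(f"accuracy on sample: {correct / len(test):.3f}")
```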
📄 Citation
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and
      Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
      Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```