🚀 prophetnet-large-uncased
Pretrained ProphetNet weights for sequence-to-sequence learning, offering strong support for text generation in natural language processing.
🚀 Quick Start
This project provides the pretrained weights of ProphetNet. ProphetNet is a new pretrained language model for sequence-to-sequence learning that uses a novel self-supervised objective called future n-gram prediction: rather than predicting only the next token, it predicts several future tokens at once through an n-stream decoder. The original implementation is the Fairseq version, available in the GitHub repository (https://github.com/microsoft/ProphetNet).
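For intuition, the future n-gram prediction objective can be sketched as follows (notation assumed from the ProphetNet paper: x is the source sequence, y the target of length T, n the number of future tokens predicted per step, and α_j the per-stream weights):

$$\mathcal{L} = -\sum_{j=0}^{n-1} \alpha_j \sum_{t=1}^{T-j} \log p_\theta\left(y_{t+j} \mid y_{<t}, x\right)$$

Setting n = 1 recovers the standard left-to-right language-modeling loss; larger n encourages the model to plan ahead for future tokens.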
💻 Usage Examples
Basic Usage
This pretrained checkpoint can be fine-tuned on sequence-to-sequence tasks. For example, the model can be trained on a headline-generation task as follows:
from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

# Load the pretrained model and its tokenizer from the Hugging Face Hub.
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")
tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")

# A (document, headline) training pair; the text is lowercased to match the uncased checkpoint.
input_str = "the us state department said wednesday it had received no formal word from bolivia that it was expelling the us ambassador there but said the charges made against him are `` baseless ."
target_str = "us rejects charges against its ambassador in bolivia"

# Tokenize the source document and the target headline.
input_ids = tokenizer(input_str, return_tensors="pt").input_ids
labels = tokenizer(target_str, return_tensors="pt").input_ids

# Passing labels makes the forward pass return the cross-entropy training loss.
loss = model(input_ids, labels=labels).loss
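After fine-tuning, the same model can generate headlines with the standard transformers generation API. A minimal sketch (the beam-search hyperparameters below are illustrative choices, not taken from the original model card):

# Generate a headline for the tokenized input document with beam search.
output_ids = model.generate(input_ids, num_beams=4, max_length=32, early_stopping=True)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])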
📄 Citation
If you use this project, please cite the following paper:
@article{yan2020prophetnet,
  title={Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training},
  author={Yan, Yu and Qi, Weizhen and Gong, Yeyun and Liu, Dayiheng and Duan, Nan and Chen, Jiusheng and Zhang, Ruofei and Zhou, Ming},
  journal={arXiv preprint arXiv:2001.04063},
  year={2020}
}