🚀 prophetnet-large-uncased
Pre-trained ProphetNet weights for sequence-to-sequence learning and text generation in natural language processing.
🚀 Quick Start
This project provides the pre-trained weights of ProphetNet, a pre-trained language model for sequence-to-sequence learning that uses a novel self-supervised objective called future n-gram prediction. With an n-stream decoder, ProphetNet predicts several future tokens at each step instead of just the next one. The original implementation is the Fairseq version, which can be found in the GitHub repo.
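The future n-gram objective can be illustrated with a minimal sketch in plain Python (a hypothetical helper for intuition only, not the actual n-stream decoder implementation): at each position, the training targets are the next n tokens rather than only the next token.

```python
def future_ngram_targets(tokens, n=2):
    """For each position i, collect the next n tokens as prediction targets.

    This mirrors the idea behind ProphetNet's future n-gram objective:
    instead of predicting only token i+1, the model is trained to predict
    tokens i+1 .. i+n at every step. (Illustrative sketch only; the real
    model does this with an n-stream self-attention decoder.)
    """
    targets = []
    for i in range(len(tokens)):
        targets.append(tokens[i + 1 : i + 1 + n])
    return targets

tokens = ["us", "rejects", "charges", "against", "ambassador"]
print(future_ngram_targets(tokens, n=2))
# [['rejects', 'charges'], ['charges', 'against'], ['against', 'ambassador'], ['ambassador'], []]
```

Predicting a short window of future tokens at once is what encourages the model to plan ahead during generation.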
💻 Usage Examples
Basic Usage
This pre-trained model can be fine-tuned on sequence-to-sequence tasks. For example, it can be trained on headline generation as follows:
```python
from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

# Load the pre-trained model and its tokenizer
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")
tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")

# A source article and its reference headline
input_str = "the us state department said wednesday it had received no formal word from bolivia that it was expelling the us ambassador there but said the charges made against him are `` baseless ."
target_str = "us rejects charges against its ambassador in bolivia"

# Tokenize the source; the tokenized target serves as the labels
input_ids = tokenizer(input_str, return_tensors="pt").input_ids
labels = tokenizer(target_str, return_tensors="pt").input_ids

# The forward pass returns the cross-entropy loss used for fine-tuning
loss = model(input_ids, labels=labels).loss
```
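After fine-tuning (or directly with the pre-trained weights, with lower quality), the model can generate a headline with beam search. A minimal sketch, assuming the standard `generate` API of `transformers` (the `num_beams` and `max_length` values are illustrative choices, not recommendations from the original project):

```python
from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")
tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")

input_str = "the us state department said wednesday it had received no formal word from bolivia that it was expelling the us ambassador there but said the charges made against him are `` baseless ."
input_ids = tokenizer(input_str, return_tensors="pt").input_ids

# Beam search decoding; hyperparameters here are illustrative
summary_ids = model.generate(input_ids, num_beams=4, max_length=64, early_stopping=True)
summary = tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0]
print(summary)
```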
📄 Citation
If you use this project, please cite the following paper:
```bibtex
@article{yan2020prophetnet,
  title={Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training},
  author={Yan, Yu and Qi, Weizhen and Gong, Yeyun and Liu, Dayiheng and Duan, Nan and Chen, Jiusheng and Zhang, Ruofei and Zhou, Ming},
  journal={arXiv preprint arXiv:2001.04063},
  year={2020}
}
```