🚀 mT5-m2m-CrossSum
This repository contains the many-to-many (m2m) mT5 checkpoint fine-tuned on all cross-lingual pairs of the CrossSum dataset. The model is intended to summarize text written in any language into a specified target language. For fine-tuning details and scripts, see the paper and the official repository.
🚀 Quick Start
This model can be used with the transformers library (tested on version 4.11.0.dev0). Example usage:
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Collapse newlines and runs of whitespace into single spaces
WHITESPACE_HANDLER = lambda k: re.sub(r'\s+', ' ', re.sub(r'\n+', ' ', k.strip()))
article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization."""
model_name = "csebuetnlp/mT5_m2m_crossSum"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# Map a target-language name to the vocabulary id of its language-id token,
# taken from the langid_map stored in the model config
get_lang_id = lambda lang: tokenizer._convert_token_to_id(
    model.config.task_specific_params["langid_map"][lang][1]
)
# Target language name (see the full list of supported names below)
target_lang = "english"
# Normalize whitespace and tokenize the article, truncating to 512 tokens
input_ids = tokenizer(
[WHITESPACE_HANDLER(article_text)],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512
)["input_ids"]
# Generate the summary; decoding starts from the target language's id token
output_ids = model.generate(
input_ids=input_ids,
decoder_start_token_id=get_lang_id(target_lang),
max_length=84,
no_repeat_ngram_size=2,
num_beams=4,
)[0]
summary = tokenizer.decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(summary)
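
The steps above can also be wrapped in a small helper for repeated use. This is only a convenience sketch, not part of the original card: it reuses the model, tokenizer, WHITESPACE_HANDLER, and get_lang_id objects defined above, and the generation settings simply mirror the snippet.

def cross_lingual_summarize(text, target_lang, max_input_length=512, max_output_length=84):
    """Summarize `text` into `target_lang` using the objects loaded above."""
    input_ids = tokenizer(
        [WHITESPACE_HANDLER(text)],
        return_tensors="pt",
        padding="max_length",
        truncation=True,
        max_length=max_input_length,
    )["input_ids"]
    output_ids = model.generate(
        input_ids=input_ids,
        decoder_start_token_id=get_lang_id(target_lang),
        max_length=max_output_length,
        no_repeat_ngram_size=2,
        num_beams=4,
    )[0]
    return tokenizer.decode(
        output_ids,
        skip_special_tokens=True,
        clean_up_tokenization_spaces=False
    )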
Available target language names
amharic (Amharic)
arabic (Arabic)
azerbaijani (Azerbaijani)
bengali (Bengali)
burmese (Burmese)
chinese_simplified (Simplified Chinese)
chinese_traditional (Traditional Chinese)
english (English)
french (French)
gujarati (Gujarati)
hausa (Hausa)
hindi (Hindi)
igbo (Igbo)
indonesian (Indonesian)
japanese (Japanese)
kirundi (Kirundi)
korean (Korean)
kyrgyz (Kyrgyz)
marathi (Marathi)
nepali (Nepali)
oromo (Oromo)
pashto (Pashto)
persian (Persian)
pidgin (Pidgin)
portuguese (Portuguese)
punjabi (Punjabi)
russian (Russian)
scottish_gaelic (Scottish Gaelic)
serbian_cyrillic (Serbian, Cyrillic script)
serbian_latin (Serbian, Latin script)
sinhala (Sinhala)
somali (Somali)
spanish (Spanish)
swahili (Swahili)
tamil (Tamil)
telugu (Telugu)
thai (Thai)
tigrinya (Tigrinya)
turkish (Turkish)
ukrainian (Ukrainian)
urdu (Urdu)
uzbek (Uzbek)
vietnamese (Vietnamese)
welsh (Welsh)
yoruba (Yoruba)
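
Any of these names can be passed as target_lang when generating. For example, using the convenience helper sketched above (the helper itself is an illustrative addition, not part of the original card):

# Summarize the same English article into Simplified Chinese and into French
print(cross_lingual_summarize(article_text, "chinese_simplified"))
print(cross_lingual_summarize(article_text, "french"))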
📄 License
This project is released under the cc-by-nc-sa-4.0 license.
📚 Citation
If you use this model, please cite the following paper:
@article{hasan2021crosssum,
author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-Bin Kang and Rifat Shahriyar},
title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs},
journal = {CoRR},
volume = {abs/2112.08804},
year = {2021},
url = {https://arxiv.org/abs/2112.08804},
eprinttype = {arXiv},
eprint = {2112.08804}
}