🚀 mT5-m2o-russian-CrossSum
This repository contains the many-to-one (m2o) mT5 checkpoint fine-tuned on all cross-lingual pairs of the CrossSum dataset with Russian as the target summary language, i.e. the model attempts to summarize text written in any supported language into Russian. For fine-tuning details and scripts, see the paper and the official repository.
🚀 Quick Start
✨ Key Features
- Multilingual input: the model supports Amharic (am), Arabic (ar), Azerbaijani (az), Bengali (bn), Burmese (my), Chinese (zh), English (en), French (fr), and many other languages.
- Many-to-one summarization: text in any of the supported languages is summarized into Russian.
📦 Installation
The original card does not document installation steps.
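The usage example below needs the Hugging Face `transformers` library, PyTorch, and the sentencepiece tokenizer that mT5 models rely on; a typical setup might look like this (package names are assumptions, not taken from the card):

```shell
# Install the Python packages used by the usage example (assumed dependencies)
pip install transformers sentencepiece torch
```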
💻 Usage Examples
Basic Usage
```python
import re

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Collapse newlines, then any remaining whitespace runs, into single spaces
WHITESPACE_HANDLER = lambda k: re.sub(r'\s+', ' ', re.sub(r'\n+', ' ', k.strip()))

article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization."""

model_name = "csebuetnlp/mT5_m2o_russian_crossSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tokenize the normalized article, truncating to a 512-token input window
input_ids = tokenizer(
    [WHITESPACE_HANDLER(article_text)],
    return_tensors="pt",
    padding="max_length",
    truncation=True,
    max_length=512
)["input_ids"]

# Generate the Russian summary with beam search
output_ids = model.generate(
    input_ids=input_ids,
    max_length=84,
    no_repeat_ngram_size=2,
    num_beams=4
)[0]

summary = tokenizer.decode(
    output_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)
print(summary)
```
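The `WHITESPACE_HANDLER` helper above simply collapses newlines and runs of whitespace into single spaces before tokenization; a quick standalone check (the sample string is illustrative):

```python
import re

# Same helper as in the usage example: collapse newlines, then whitespace runs
WHITESPACE_HANDLER = lambda k: re.sub(r'\s+', ' ', re.sub(r'\n+', ' ', k.strip()))

sample = "  First line.\n\nSecond\tline.  "
print(WHITESPACE_HANDLER(sample))  # → First line. Second line.
```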
📄 License
This project is released under the CC BY-NC-SA 4.0 license.
📚 Citation
If you use this model, please cite the following paper:
```bibtex
@article{hasan2021crosssum,
  author     = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar},
  title      = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs},
  journal    = {CoRR},
  volume     = {abs/2112.08804},
  year       = {2021},
  url        = {https://arxiv.org/abs/2112.08804},
  eprinttype = {arXiv},
  eprint     = {2112.08804}
}
```
📋 Model Information

| Property | Details |
| --- | --- |
| Model type | mT5 many-to-one (m2o) fine-tuned model |
| Training data | All cross-lingual pairs of the CrossSum dataset |