T5 Summary En Ru Zh Large 2048
🚀 T5 model for multilingual text summarization in English, Russian, and Chinese
This model performs controlled summary generation in a multi-task mode, with built-in translation between Russian, Chinese, and English. It handles summarization of text in any of these languages, with optional translation of the summary, providing an efficient solution for cross-lingual information processing.
🚀 Quick Start
This is a multi-task T5 model with conditionally controlled generation of summary content and translation. In total it understands 12 instructions, depending on the prefix:
- "summary: " - 用于生成源语言的简单简洁内容
- "summary brief: " - 用于生成源语言的简短摘要内容
- "summary big: " - 用于生成源语言的详细摘要内容
你可以有条件地将输出限制为给定的N个单词,只需在任务后添加短语 "N words" 即可。
- "summary 20 words: " - 用于生成源语言的简单简洁内容
- "summary brief 4 words: " - 用于生成源语言的简短摘要内容
- "summary big 100 words: " - 用于生成源语言的详细摘要内容
单词级别的限制在小篇幅内容上比大篇幅内容效果更好。
The model understands text in any of the listed languages (Russian, Chinese, or English) and can translate the result into any of them.
To translate into a target language, append the target language identifier to the prefix: "... to <lang>: ", where <lang> is en, ru, or zh.
The task prefixes are as follows:
4) "summary to en: " - generates an English summary from multilingual text
5) "summary brief to en: " - generates a brief English summary from multilingual text
6) "summary big to en: " - generates a detailed English summary from multilingual text
7) "summary to ru: " - generates a Russian summary from multilingual text
8) "summary brief to ru: " - generates a brief Russian summary from multilingual text
9) "summary big to ru: " - generates a detailed Russian summary from multilingual text
10) "summary to zh: " - generates a Chinese summary from multilingual text
11) "summary brief to zh: " - generates a brief Chinese summary from multilingual text
12) "summary big to zh: " - generates a detailed Chinese summary from multilingual text
The trained model handles input contexts of up to 2048 tokens and produces summaries of up to 200 tokens for the big task, 50 tokens for the regular summary task, and 20 tokens for the brief task.
A translation task prefix can also carry a word-count limit, for example: "summary brief to en 4 words: ".
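Since all 12 prefixes follow the same pattern, they can be assembled programmatically. Below is a minimal sketch of such a helper; the `build_prefix` function and its parameter names are illustrative only, not part of the model's API, and the truncation shown in the comment mirrors the 2048-token context limit noted above.

```python
from typing import Optional

def build_prefix(size: str = '', to_lang: Optional[str] = None,
                 n_words: Optional[int] = None) -> str:
    """Assemble a task prefix such as 'summary brief to en 4 words: '.

    size: '' (regular), 'brief', or 'big'
    to_lang: optional target language: 'en', 'ru', or 'zh'
    n_words: optional approximate word limit for the output
    (hypothetical helper, for illustration only)
    """
    parts = ['summary']
    if size:
        parts.append(size)
    if to_lang:
        parts += ['to', to_lang]
    if n_words:
        parts.append(f'{n_words} words')
    return ' '.join(parts) + ': '

assert build_prefix('brief', 'en', 4) == 'summary brief to en 4 words: '

# When tokenizing, long inputs can be truncated to the 2048-token window:
# inputs = tokenizer(build_prefix('big') + text, truncation=True,
#                    max_length=2048, return_tensors="pt")
```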
💻 Usage Examples
Basic Usage
Example of summarizing an English text:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

device = 'cuda'  # or 'cpu' to run on CPU

model_name = 'utrobinmv/t5_summary_en_ru_zh_large_2048'
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.eval()
model.to(device)

generation_config = model.generation_config
# for quality generation
generation_config.length_penalty = 0.6
generation_config.no_repeat_ngram_size = 2
generation_config.num_beams = 10

tokenizer = T5Tokenizer.from_pretrained(model_name)

text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization."""

# generate a regular summary of the text
prefix = 'summary: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#YouTube to remove videos claiming approved COVID-19 vaccines cause harm, including autism, cancer, and infertility. It will terminate accounts of anti-vaccine influencers and expand its medical misinformation policies.

# generate a brief summary of the text
prefix = 'summary brief: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#YouTube has announced a crackdown on misinformation about Covid-19 vaccines.

# generate a 4-word summary of the text
prefix = 'summary brief 4 words: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#YouTube removes vaccine misinformation.

# generate a detailed (big) summary of the text
prefix = 'summary big: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#YouTube, owned by Google, is removing videos claiming approved vaccines are dangerous and cause autism, cancer, or infertility. The company will terminate accounts of anti-vaccine influencers and expand its medical misinformation policies. This follows criticism of tech giants for not doing more to combat false health information on their sites. In July, US President Joe Biden called for social media platforms to address the issue of vaccine skepticism. Since implementing a ban on Covid vaccine content in 2021, 13 million videos have been removed. New policies cover long-approved vaccinations, such as those against measles or hepatitis B.
```
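The four calls above differ only in their prefix, so once the model, tokenizer, `generation_config`, and `text` are set up as shown, they can be collapsed into a loop. This compact variant is a sketch, not a separate API:

```python
for prefix in ('summary: ', 'summary brief: ',
               'summary brief 4 words: ', 'summary big: '):
    # reuse the model, tokenizer, and generation_config defined above
    inputs = tokenizer(prefix + text, return_tensors="pt").to(device)
    tokens = model.generate(**inputs, generation_config=generation_config)
    print(prefix, tokenizer.batch_decode(tokens, skip_special_tokens=True)[0])
```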
Example of summarizing a Chinese text into English:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

device = 'cuda'  # or 'cpu' to run on CPU

model_name = 'utrobinmv/t5_summary_en_ru_zh_large_2048'
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.eval()
model.to(device)

generation_config = model.generation_config
# for quality generation
generation_config.length_penalty = 0.6
generation_config.no_repeat_ngram_size = 2
generation_config.num_beams = 10

tokenizer = T5Tokenizer.from_pretrained(model_name)

text = """在北京冬奥会自由式滑雪女子坡面障碍技巧决赛中,中国选手谷爱凌夺得银牌。祝贺谷爱凌!今天上午,自由式滑雪女子坡面障碍技巧决赛举行。决赛分三轮进行,取选手最佳成绩排名决出奖牌。第一跳,中国选手谷爱凌获得69.90分。在12位选手中排名第三。完成动作后,谷爱凌又扮了个鬼脸,甚是可爱。第二轮中,谷爱凌在道具区第三个障碍处失误,落地时摔倒。获得16.98分。网友:摔倒了也没关系,继续加油!在第二跳失误摔倒的情况下,谷爱凌顶住压力,第三跳稳稳发挥,流畅落地!获得86.23分!此轮比赛,共12位选手参赛,谷爱凌第10位出场。网友:看比赛时我比谷爱凌紧张,加油!"""

# generate a regular English summary of the text
prefix = 'summary to en: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#In the women's freestyle skiing final at the Beijing Winter Olympics, Chinese skater Gu Ailing won silver. She scored 69.90 in the first jump, ranked 3rd among 12 competitors. Despite a fall, she managed to land smoothly, earning 86.23 points.

# generate a brief English summary of the text
prefix = 'summary brief to en: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#"Chinese Skier Wins Silver in Beijing"

# generate a 4-word English summary of the text
prefix = 'summary brief to en 4 words: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#"Chinese Skier Wins Silver"

# generate a detailed (big) English summary of the text
prefix = 'summary big to en: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#In the women's freestyle ski slope obstacle technique final at the Beijing Winter Olympics, Chinese skater Gu Ailing won silver. She scored 69.90 in her first jump, placing third among the 12 competitors. Despite a fall in the second round, she managed to land smoothly, earning 86.23 points. The final was held in three rounds.
```
Example of summarizing a Russian text:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

device = 'cuda'  # or 'cpu' to run on CPU

model_name = 'utrobinmv/t5_summary_en_ru_zh_large_2048'
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.eval()
model.to(device)

generation_config = model.generation_config
# for quality generation
generation_config.length_penalty = 0.6
generation_config.no_repeat_ngram_size = 2
generation_config.num_beams = 10

tokenizer = T5Tokenizer.from_pretrained(model_name)

text = """Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо."""

# generate a regular summary of the text
prefix = 'summary: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#Эйфелева башня - самое высокое здание в Париже, высотой 324 метра. Ее основание квадратное, размером 125 метров с каждой стороны. Во время строительства она превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире.

# generate a brief summary of the text
prefix = 'summary brief: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#Эйфелева башня - самое высокое здание в Париже, высотой 324 метра.

# generate a 4-word summary of the text
prefix = 'summary brief 4 words: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#Эйфелева башня - самая высокая.

# generate a detailed (big) summary of the text
prefix = 'summary big: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#Эйфелева башня - самое высокое здание в Париже, высотой 324 метра. Ее основание квадратное, размером 125 метров с каждой стороны. Во время строительства она превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире. Из-за добавления вещательной антенны на вершине башни она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо.
```
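To summarize several documents at once, the tokenizer can pad a batch in a single call and `generate` will process them together. A minimal sketch, reusing the model, tokenizer, and `generation_config` from the examples above; the `texts` list is a placeholder for your own inputs:

```python
texts = ["First document ...", "Second document ..."]  # placeholder inputs
batch = ['summary: ' + t for t in texts]
inputs = tokenizer(batch, padding=True, truncation=True,
                   max_length=2048, return_tensors="pt").to(device)
tokens = model.generate(**inputs, generation_config=generation_config)
for summary in tokenizer.batch_decode(tokens, skip_special_tokens=True):
    print(summary)
```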
📚 Documentation
Supported Languages
The model supports Russian (ru_RU), Chinese (zh_CN), and English (en_US).
Base Model
Built on the base model utrobinmv/t5_translate_en_ru_zh_large_1024_v2.
Example Data
Sample texts and their generated summaries are provided above for English, Chinese, and Russian, covering summaries of different lengths as well as cross-lingual summarization.
📄 License
This project is released under the apache-2.0 license.



