T5 Summary En Ru Zh Large 2048
🚀 T5 model for multilingual text summarization in English, Russian, and Chinese
This model performs controlled summary generation in a multitask mode, with built-in translation among Russian, Chinese, and English. It handles both summarization and translation of text across these languages, providing an efficient solution for cross-lingual information processing.
🚀 Quick Start
This is a T5 multitask model capable of conditionally controlled summary generation with translation. In total it understands 12 instructions, depending on the prefix you set:
- "summary: " - 用於生成源語言的簡單簡潔內容
- "summary brief: " - 用於生成源語言的簡短摘要內容
- "summary big: " - 用於生成源語言的詳細摘要內容
You can conditionally constrain the output to roughly N words by appending the phrase "N words" to the task:
- "summary 20 words: " - 用於生成源語言的簡單簡潔內容
- "summary brief 4 words: " - 用於生成源語言的簡短摘要內容
- "summary big 100 words: " - 用於生成源語言的詳細摘要內容
Word-count limits work better on short texts than on long ones.
The model can understand text in any of the listed languages (Russian, Chinese, or English) and can translate the result into any of them.
To translate into a target language, add the target-language identifier to the prefix in the form "... to <lang>: ".
The task prefixes are as follows:
- "summary to en: " - generates an English summary from multilingual text
- "summary brief to en: " - generates a brief English summary from multilingual text
- "summary big to en: " - generates a detailed English summary from multilingual text
- "summary to ru: " - generates a Russian summary from multilingual text
- "summary brief to ru: " - generates a brief Russian summary from multilingual text
- "summary big to ru: " - generates a detailed Russian summary from multilingual text
- "summary to zh: " - generates a Chinese summary from multilingual text
- "summary brief to zh: " - generates a brief Chinese summary from multilingual text
- "summary big to zh: " - generates a detailed Chinese summary from multilingual text
The trained model handles contexts of up to 2048 tokens and produces summaries of up to 200 tokens for the big task, 50 tokens for the regular summary task, and 20 tokens for the brief task.
An example of a translation task prefix with a word-count limit: "summary brief to en 4 words: "
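For convenience, the twelve prefixes can be assembled programmatically. The sketch below is illustrative only; the build_prefix helper is not part of the model, it simply follows the prefix grammar described above:
def build_prefix(style='', target_lang='', n_words=0):
    """Assemble a task prefix such as 'summary brief to en 4 words: '.
    style: '' (regular), 'brief', or 'big'
    target_lang: '' (keep source language), 'en', 'ru', or 'zh'
    n_words: 0 for no limit, otherwise an approximate word count
    """
    parts = ['summary']
    if style:
        parts.append(style)
    if target_lang:
        parts.append('to ' + target_lang)
    if n_words:
        parts.append(f'{n_words} words')
    return ' '.join(parts) + ': '
assert build_prefix('brief', 'en', 4) == 'summary brief to en 4 words: '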
💻 Usage Examples
Basic Usage
Example code for summarizing an English text:
from transformers import T5ForConditionalGeneration, T5Tokenizer
device = 'cuda'  # or 'cpu' to run inference on CPU
model_name = 'utrobinmv/t5_summary_en_ru_zh_large_2048'
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.eval()
model.to(device)
generation_config = model.generation_config
# settings for higher-quality generation
generation_config.length_penalty = 0.6
generation_config.no_repeat_ngram_size = 2
generation_config.num_beams = 10
tokenizer = T5Tokenizer.from_pretrained(model_name)
text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization."""
# generate a regular summary
prefix = 'summary: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#YouTube to remove videos claiming approved COVID-19 vaccines cause harm, including autism, cancer, and infertility. It will terminate accounts of anti-vaccine influencers and expand its medical misinformation policies.
# generate a brief summary
prefix = 'summary brief: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#YouTube has announced a crackdown on misinformation about Covid-19 vaccines.
# generate a 4-word summary of the text
prefix = 'summary brief 4 words: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#YouTube removes vaccine misinformation.
# generate an extended (big) summary
prefix = 'summary big: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#YouTube, owned by Google, is removing videos claiming approved vaccines are dangerous and cause autism, cancer, or infertility. The company will terminate accounts of anti-vaccine influencers and expand its medical misinformation policies. This follows criticism of tech giants for not doing more to combat false health information on their sites. In July, US President Joe Biden called for social media platforms to address the issue of vaccine skepticism. Since implementing a ban on Covid vaccine content in 2021, 13 million videos have been removed. New policies cover long-approved vaccinations, such as those against measles or hepatitis B.
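If GPU memory is limited, the checkpoint can also be loaded in half precision before moving it to the device. This is a standard transformers option rather than something specific to this model, and summary quality in fp16 is an assumption worth verifying:
import torch
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained(
    'utrobinmv/t5_summary_en_ru_zh_large_2048',
    torch_dtype=torch.float16,  # assumption: fp16 inference preserves summary quality
)
model.eval()
model.to('cuda')  # half-precision weights are intended for GPU inference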
Example code for summarizing a Chinese text into English:
from transformers import T5ForConditionalGeneration, T5Tokenizer
device = 'cuda'  # or 'cpu' to run inference on CPU
model_name = 'utrobinmv/t5_summary_en_ru_zh_large_2048'
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.eval()
model.to(device)
generation_config = model.generation_config
# settings for higher-quality generation
generation_config.length_penalty = 0.6
generation_config.no_repeat_ngram_size = 2
generation_config.num_beams = 10
tokenizer = T5Tokenizer.from_pretrained(model_name)
text = """在北京冬奧會自由式滑雪女子坡面障礙技巧決賽中,中國選手谷愛凌奪得銀牌。祝賀谷愛凌!今天上午,自由式滑雪女子坡面障礙技巧決賽舉行。決賽分三輪進行,取選手最佳成績排名決出獎牌。第一跳,中國選手谷愛凌獲得69.90分。在12位選手中排名第三。完成動作後,谷愛凌又扮了個鬼臉,甚是可愛。第二輪中,谷愛凌在道具區第三個障礙處失誤,落地時摔倒。獲得16.98分。網友:摔倒了也沒關係,繼續加油!在第二跳失誤摔倒的情況下,谷愛凌頂住壓力,第三跳穩穩發揮,流暢落地!獲得86.23分!此輪比賽,共12位選手參賽,谷愛凌第10位出場。網友:看比賽時我比谷愛凌緊張,加油!"""
# generate an English summary (summarize and translate)
prefix = 'summary to en: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#In the women's freestyle skiing final at the Beijing Winter Olympics, Chinese skater Gu Ailing won silver. She scored 69.90 in the first jump, ranked 3rd among 12 competitors. Despite a fall, she managed to land smoothly, earning 86.23 points.
# generate a brief English summary
prefix = 'summary brief to en: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#"Chinese Skier Wins Silver in Beijing"
# generate a 4-word English summary of the text
prefix = 'summary brief to en 4 words: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#"Chinese Skier Wins Silver"
# generate an extended (big) English summary
prefix = 'summary big to en: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#In the women's freestyle ski slope obstacle technique final at the Beijing Winter Olympics, Chinese skater Gu Ailing won silver. She scored 69.90 in her first jump, placing third among the 12 competitors. Despite a fall in the second round, she managed to land smoothly, earning 86.23 points. The final was held in three rounds.
Example code for summarizing a Russian text:
from transformers import T5ForConditionalGeneration, T5Tokenizer
device = 'cuda'  # or 'cpu' to run inference on CPU
model_name = 'utrobinmv/t5_summary_en_ru_zh_large_2048'
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.eval()
model.to(device)
generation_config = model.generation_config
# settings for higher-quality generation
generation_config.length_penalty = 0.6
generation_config.no_repeat_ngram_size = 2
generation_config.num_beams = 10
tokenizer = T5Tokenizer.from_pretrained(model_name)
text = """Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо."""
# generate a regular summary
prefix = 'summary: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#Эйфелева башня - самое высокое здание в Париже, высотой 324 метра. Ее основание квадратное, размером 125 метров с каждой стороны. Во время строительства она превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире.
# generate a brief summary
prefix = 'summary brief: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#Эйфелева башня - самое высокое здание в Париже, высотой 324 метра.
# generate a 4-word summary of the text
prefix = 'summary brief 4 words: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#Эйфелева башня - самая высокая.
# generate an extended (big) summary
prefix = 'summary big: '
src_text = prefix + text
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device), generation_config=generation_config)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#Эйфелева башня - самое высокое здание в Париже, высотой 324 метра. Ее основание квадратное, размером 125 метров с каждой стороны. Во время строительства она превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире. Из-за добавления вещательной антенны на вершине башни она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо.
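The examples above process one text at a time. Multiple prefixed inputs can also be batched into a single generate call. A minimal sketch, assuming the model, tokenizer, generation_config, and device are already set up as above, and that text_en and text_zh hold the English and Chinese articles:
texts = [
    'summary: ' + text_en,        # assumption: text_en is the English article above
    'summary to en: ' + text_zh,  # assumption: text_zh is the Chinese article above
]
# pad the batch to a common length; the model accepts contexts up to 2048 tokens
batch = tokenizer(texts, return_tensors='pt', padding=True,
                  truncation=True, max_length=2048)
generated_tokens = model.generate(**batch.to(device),
                                  generation_config=generation_config)
for summary in tokenizer.batch_decode(generated_tokens, skip_special_tokens=True):
    print(summary)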
📚 Documentation
Supported Languages
The model supports Russian (ru_RU), Chinese (zh_CN), and English (en_US).
Base Model
Built on the base model utrobinmv/t5_translate_en_ru_zh_large_1024_v2.
Example Data
Example texts in several languages with their generated summaries are shown above, covering English, Chinese, and Russian summaries of different lengths as well as cross-lingual summarization.
📄 License
This project is licensed under apache-2.0.



