🚀 Randeng-Pegasus-523M-Summary-Chinese-V1
A Chinese version of PEGASUS-large, fine-tuned on multiple Chinese text summarization datasets and specialized in text summarization tasks.
🚀 Quick Start
This model is the Chinese version of PEGASUS-large. It has been fine-tuned on multiple Chinese text summarization datasets and excels at text summarization tasks.
✨ Features
- Task Suitability: Specialized in text summarization tasks.
- Fine-tuning: Fine-tuned on multiple Chinese text summarization datasets to enhance performance.
📦 Model Taxonomy
| Property | Details |
| :--- | :--- |
| Demand | General |
| Task | Natural Language Transformation (NLT) |
| Series | Randeng |
| Model | PEGASUS |
| Parameter | 523M |
| Extra | Chinese Text Summarization Task |
📚 Documentation
Model Information
Reference Paper: PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization
Based on Randeng-Pegasus-523M-Chinese, we re-fine-tuned the model on a filtered dataset (about 1.8M samples) obtained by filtering 7 Chinese text summarization datasets (about 4M samples in total) using entity filtering. This process improved the faithfulness of the summaries to the original text without degrading the downstream metrics, resulting in the summary-v1 version. The 7 datasets are: education, new2016zh, nlpcc, shence, sohu, thucnews, and weibo.
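The exact filtering script is not included in this card, so the snippet below is only a minimal sketch of one plausible reading of "entity filtering": a (source, summary) pair is kept only when every entity mentioned in the reference summary also appears in the source document. jieba's POS tagger stands in for a real Chinese NER component, and `ENTITY_FLAGS`, `is_faithful`, and `raw_pairs` are hypothetical names introduced here for illustration.

```python
# Minimal sketch of entity-based filtering (illustrative; not the authors' released pipeline).
import jieba.posseg as pseg

ENTITY_FLAGS = {"nr", "ns", "nt"}  # jieba POS tags for person, place, and organization


def extract_entities(text):
    # Use jieba's POS tags as a lightweight stand-in for a proper Chinese NER model.
    return {pair.word for pair in pseg.cut(text) if pair.flag in ENTITY_FLAGS}


def is_faithful(source, summary):
    # Keep the pair only if every entity in the summary also occurs verbatim in the source.
    return all(entity in source for entity in extract_entities(summary))


# filtered = [(src, ref) for src, ref in raw_pairs if is_faithful(src, ref)]
```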
Performance
| Dataset | Rouge-1 | Rouge-2 | Rouge-L |
| :--- | :--- | :--- | :--- |
| LCSTS | 46.94 | 33.92 | 43.51 |
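The card does not state which scoring script produced these numbers; as a rough reference, the sketch below shows how character-level ROUGE for Chinese summaries can be computed with the `rouge` pip package. The two strings are placeholder examples, and character-level tokenization is an assumption rather than a documented evaluation setting.

```python
# Hedged sketch: character-level ROUGE with the `rouge` pip package (pip install rouge).
from rouge import Rouge


def to_char_tokens(text):
    # One common convention for Chinese ROUGE: treat every character as a token.
    return " ".join(text.replace(" ", ""))


# Placeholder strings standing in for a model output and a gold reference summary.
hypothesis = to_char_tokens("谷爱凌夺得银牌")
reference = to_char_tokens("中国选手谷爱凌摘得银牌")

scores = Rouge().get_scores(hypothesis, reference)[0]
print(scores["rouge-1"]["f"], scores["rouge-2"]["f"], scores["rouge-l"]["f"])
```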
💻 Usage Examples
Basic Usage
```python
from transformers import PegasusForConditionalGeneration
# The custom PegasusTokenizer (tokenizers_pegasus.py) comes from the Fengshenbang-LM repository and must be on the Python path.
from tokenizers_pegasus import PegasusTokenizer

model = PegasusForConditionalGeneration.from_pretrained("IDEA-CCNL/Randeng-Pegasus-523M-Summary-Chinese-V1")
tokenizer = PegasusTokenizer.from_pretrained("IDEA-CCNL/Randeng-Pegasus-523M-Summary-Chinese-V1")

# Example input: a Chinese news report on Gu Ailing's silver medal in freestyle skiing slopestyle at the Beijing Winter Olympics.
text = "在北京冬奥会自由式滑雪女子坡面障碍技巧决赛中,中国选手谷爱凌夺得银牌。祝贺谷爱凌!今天上午,自由式滑雪女子坡面障碍技巧决赛举行。决赛分三轮进行,取选手最佳成绩排名决出奖牌。第一跳,中国选手谷爱凌获得69.90分。在12位选手中排名第三。完成动作后,谷爱凌又扮了个鬼脸,甚是可爱。第二轮中,谷爱凌在道具区第三个障碍处失误,落地时摔倒。获得16.98分。网友:摔倒了也没关系,继续加油!在第二跳失误摔倒的情况下,谷爱凌顶住压力,第三跳稳稳发挥,流畅落地!获得86.23分!此轮比赛,共12位选手参赛,谷爱凌第10位出场。网友:看比赛时我比谷爱凌紧张,加油!"

# Tokenize the input, generate a summary, and decode it back to text.
inputs = tokenizer(text, max_length=1024, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"])
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
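Decoding can be tuned with the standard `generate` arguments from transformers; the values below are illustrative choices, not the settings used to produce the scores reported above.

```python
# Optional: illustrative generation settings (beam search with a length cap).
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,             # beam search instead of greedy decoding
    max_length=64,           # cap the summary length in tokens
    no_repeat_ngram_size=3,  # discourage repeated phrases
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```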
📄 Citation
If you use this resource in your work, please cite our paper:
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
You can also cite our website:
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}