🚀 ptt5-v2-small
The ptt5-v2 models are pretrained T5 models tailored for the Portuguese language, obtained by continuing the training of Google's original checkpoints, in sizes ranging from t5-small to t5-3B. These checkpoints were used to train MonoT5 rerankers for Portuguese, which you can find in their HuggingFace collection. For further information about the pretraining process, please refer to our paper, ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language.
🚀 Quick Start
Datasets
- allenai/c4
- legacy-datasets/mc4

Language
Portuguese (pt)

Task type
text2text-generation

Base model
google-t5/t5-small

License
apache-2.0
✨ Key Features
- Pretrained T5 models tailored for the Portuguese language.
- Continued pretraining from Google's original checkpoints.
- Can be used to train MonoT5 rerankers for Portuguese (see the scoring sketch below).
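To illustrate the reranking use case, here is a minimal sketch of MonoT5-style scoring (Nogueira et al., 2020): the model reads a "Query: ... Document: ... Relevant:" prompt and the relevance score is the probability of the "true" token at the first decoder step. The checkpoint name below is hypothetical, and the exact prompt template and target tokens may differ per checkpoint, so check the model card of the reranker you actually use:

import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Hypothetical reranker name; see the HuggingFace collection for exact identifiers.
reranker = "unicamp-dl/monoptt5-small"
tok = T5Tokenizer.from_pretrained(reranker)
mdl = T5ForConditionalGeneration.from_pretrained(reranker)
mdl.eval()

query = "quem descobriu o Brasil?"                       # example query
doc = "Pedro Álvares Cabral chegou ao Brasil em 1500."   # example passage

# MonoT5 input pattern; prompt wording is an assumption for this checkpoint.
text = f"Query: {query} Document: {doc} Relevant:"
inputs = tok(text, return_tensors="pt", truncation=True)

true_id = tok.encode("true")[0]    # id of the "true" target token
false_id = tok.encode("false")[0]  # id of the "false" target token

with torch.no_grad():
    out = mdl(**inputs,
              decoder_input_ids=torch.tensor([[mdl.config.decoder_start_token_id]]))
# Compare the two target tokens at the first decoder step.
logits = out.logits[0, 0, [true_id, false_id]]
score = torch.softmax(logits, dim=0)[0].item()  # probability of "true" = relevance
print(score)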
💻 Usage Examples
Basic usage
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the ptt5-v2-small tokenizer and model from the HuggingFace Hub.
tokenizer = T5Tokenizer.from_pretrained("unicamp-dl/ptt5-v2-small")
model = T5ForConditionalGeneration.from_pretrained("unicamp-dl/ptt5-v2-small")
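Since ptt5-v2-small is a pretrained (not fine-tuned) checkpoint, one quick way to exercise it is T5's span-corruption format, where sentinel tokens such as <extra_id_0> mark masked spans. A minimal sketch reusing the tokenizer and model loaded above; the Portuguese sentence is illustrative only:

# Ask the model to fill in a masked span (T5 span-corruption format).
inputs = tokenizer("A capital do Brasil é <extra_id_0>.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
# The output keeps the sentinel tokens that delimit the predicted spans.
print(tokenizer.decode(outputs[0], skip_special_tokens=False))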
📄 License
This project is licensed under the apache-2.0 license.
📚 Citation
If you use our models, please cite:
@misc{piau2024ptt5v2,
title={ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language},
author={Marcos Piau and Roberto Lotufo and Rodrigo Nogueira},
year={2024},
eprint={2406.10806},
archivePrefix={arXiv},
primaryClass={cs.CL}
}