🚀 gpt2-wechsel-chinese
This project provides models trained with WECHSEL, a method for the effective initialization of subword embeddings for the cross-lingual transfer of monolingual language models.
The code is available at: https://github.com/CPJKU/wechsel
The paper is available at: https://aclanthology.org/2022.naacl-main.293/
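The sketch below shows one way to load such a model with the Hugging Face transformers library and sample a continuation. It is only an orientation example: the hub ID benjamin/gpt2-wechsel-chinese and the prompt are assumptions, not taken from this document, so check the model hub for the exact repository name.

```python
# Minimal sketch: load a WECHSEL-transferred GPT-2 and generate text.
# The hub ID below is an assumption; substitute the actual repository name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "benjamin/gpt2-wechsel-chinese"  # assumed hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("今天天气", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```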
✨ Key Features
This project reports the performance of WECHSEL-trained models in several languages on downstream tasks and compares them with baseline models.
📚 Documentation
Performance
RoBERTa model performance
| Model | NLI Score | NER Score | Avg Score |
| :--- | :--- | :--- | :--- |
| roberta-base-wechsel-french | 82.43 | 90.88 | 86.65 |
| camembert-base | 80.88 | 90.26 | 85.57 |

| Model | NLI Score | NER Score | Avg Score |
| :--- | :--- | :--- | :--- |
| roberta-base-wechsel-german | 81.79 | 89.72 | 85.76 |
| deepset/gbert-base | 78.64 | 89.46 | 84.05 |

| Model | NLI Score | NER Score | Avg Score |
| :--- | :--- | :--- | :--- |
| roberta-base-wechsel-chinese | 78.32 | 80.55 | 79.44 |
| bert-base-chinese | 76.55 | 82.05 | 79.30 |

| Model | NLI Score | NER Score | Avg Score |
| :--- | :--- | :--- | :--- |
| roberta-base-wechsel-swahili | 75.05 | 87.39 | 81.22 |
| xlm-roberta-base | 69.18 | 87.37 | 78.28 |
GPT2 model performance
| Model | Perplexity (PPL) |
| :--- | :--- |
| gpt2-wechsel-french | 19.71 |
| gpt2 (retrained from scratch) | 20.47 |

| Model | Perplexity (PPL) |
| :--- | :--- |
| gpt2-wechsel-german | 26.8 |
| gpt2 (retrained from scratch) | 27.63 |

| Model | Perplexity (PPL) |
| :--- | :--- |
| gpt2-wechsel-chinese | 51.97 |
| gpt2 (retrained from scratch) | 52.98 |

| Model | Perplexity (PPL) |
| :--- | :--- |
| gpt2-wechsel-swahili | 10.14 |
| gpt2 (retrained from scratch) | 10.58 |
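As a rough illustration of how the perplexity figures above are defined, the sketch below computes the perplexity of a causal language model on a single piece of text. It is not the evaluation setup used in the paper; the hub ID and the sample sentence are placeholders.

```python
# Sketch: perplexity of a causal LM on one text (illustration only,
# not the paper's evaluation script).
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "benjamin/gpt2-wechsel-chinese"  # assumed hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

text = "语言模型可以用来评估一段文本的困惑度。"  # placeholder sentence
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels == input_ids, the model returns the mean token-level
    # cross-entropy; perplexity is its exponential.
    loss = model(**enc, labels=enc["input_ids"]).loss

print(f"Perplexity: {math.exp(loss.item()):.2f}")
```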
For more details, please refer to our paper.
📄 License
This project is released under the MIT License.
🔗 Citation
If you use WECHSEL, please cite it as follows:
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}