🚀 roberta-base-wechsel-german
This model was trained with WECHSEL: effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code at: https://github.com/CPJKU/wechsel
See the paper at: https://aclanthology.org/2022.naacl-main.293/
🚀 Quick Start
This project provides language models trained with the WECHSEL method, which perform well across several languages and tasks. See the code and paper linked above for more details.
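A minimal usage sketch with the 🤗 Transformers library is shown below. The model ID `benjamin/roberta-base-wechsel-german` is an assumption; adjust it to the Hub repository you are actually loading from.

```python
from transformers import pipeline

# Load the German WECHSEL RoBERTa checkpoint as a fill-mask pipeline.
# NOTE: the model ID below is an assumption; replace it with the actual Hub repository if it differs.
fill_mask = pipeline("fill-mask", model="benjamin/roberta-base-wechsel-german")

# RoBERTa-style tokenizers use "<mask>" as the mask token.
for prediction in fill_mask("Die Hauptstadt von Deutschland ist <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```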
✨ Key Features
- Cross-lingual transfer: effectively transfers a monolingual model to other languages, reducing training cost (a conceptual sketch of the initialization follows this list).
- Strong performance: outperforms the baseline models on several tasks (e.g. NLI, NER, PPL).
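The core idea is to initialize each target-language token embedding as a similarity-weighted combination of the source (English) token embeddings, with similarities computed from multilingual static word embeddings that cover both languages. The sketch below is a conceptual illustration only, not the library's implementation; all names (`init_target_embeddings`, `source_token_vecs`, and so on) are hypothetical, and the actual API lives in the repository linked above.

```python
import numpy as np

def init_target_embeddings(source_token_vecs, target_token_vecs, source_lm_embeddings,
                           k=10, temperature=0.1):
    """Conceptual sketch of a WECHSEL-style embedding initialization.

    source_token_vecs:    (V_src, d) static-embedding vectors for the source subwords
    target_token_vecs:    (V_tgt, d) static-embedding vectors for the target subwords,
                          aligned into the same space as the source vectors
    source_lm_embeddings: (V_src, h) input embeddings of the pretrained source LM
    """
    # Cosine similarity between every target subword and every source subword.
    src = source_token_vecs / np.linalg.norm(source_token_vecs, axis=1, keepdims=True)
    tgt = target_token_vecs / np.linalg.norm(target_token_vecs, axis=1, keepdims=True)
    sims = tgt @ src.T  # shape (V_tgt, V_src)

    target_lm_embeddings = np.empty((tgt.shape[0], source_lm_embeddings.shape[1]))
    for i in range(tgt.shape[0]):
        # Keep the k most similar source subwords and normalize their similarities.
        top = np.argpartition(-sims[i], k)[:k]
        weights = np.exp(sims[i, top] / temperature)
        weights /= weights.sum()
        # Initialize the target embedding as a weighted average of source LM embeddings.
        target_lm_embeddings[i] = weights @ source_lm_embeddings[top]
    return target_lm_embeddings
```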
📚 Documentation
Performance
RoBERTa Models
| Model | NLI Score | NER Score | Average Score |
|---|---|---|---|
| roberta-base-wechsel-french | 82.43 | 90.88 | 86.65 |
| camembert-base | 80.88 | 90.26 | 85.57 |

| Model | NLI Score | NER Score | Average Score |
|---|---|---|---|
| roberta-base-wechsel-german | 81.79 | 89.72 | 85.76 |
| deepset/gbert-base | 78.64 | 89.46 | 84.05 |

| Model | NLI Score | NER Score | Average Score |
|---|---|---|---|
| roberta-base-wechsel-chinese | 78.32 | 80.55 | 79.44 |
| bert-base-chinese | 76.55 | 82.05 | 79.30 |

| Model | NLI Score | NER Score | Average Score |
|---|---|---|---|
| roberta-base-wechsel-swahili | 75.05 | 87.39 | 81.22 |
| xlm-roberta-base | 69.18 | 87.37 | 78.28 |
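The NLI scores above come from fine-tuning on natural language inference data. The snippet below is a minimal fine-tuning sketch, not the paper's exact setup: it assumes the Hugging Face `xnli` dataset with the `de` config, the assumed Hub ID `benjamin/roberta-base-wechsel-german`, and illustrative hyperparameters.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "benjamin/roberta-base-wechsel-german"  # assumed Hub repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)

# German XNLI: premise/hypothesis pairs labeled entailment/neutral/contradiction.
dataset = load_dataset("xnli", "de")

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="wechsel-german-xnli",
                           per_device_train_batch_size=32,
                           num_train_epochs=3,
                           learning_rate=2e-5),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```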
GPT-2 Models

| Model | PPL |
|---|---|
| gpt2-wechsel-french | 19.71 |
| gpt2 (retrained from scratch) | 20.47 |

| Model | PPL |
|---|---|
| gpt2-wechsel-german | 26.8 |
| gpt2 (retrained from scratch) | 27.63 |

| Model | PPL |
|---|---|
| gpt2-wechsel-chinese | 51.97 |
| gpt2 (retrained from scratch) | 52.98 |

| Model | PPL |
|---|---|
| gpt2-wechsel-swahili | 10.14 |
| gpt2 (retrained from scratch) | 10.58 |
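Perplexity (PPL) is the exponential of the average cross-entropy loss over held-out text. The sketch below shows how such a number can be computed for a single sentence; it assumes the Hub ID `benjamin/gpt2-wechsel-german` and an arbitrary sample text, and the paper's evaluation corpus and protocol may differ.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "benjamin/gpt2-wechsel-german"  # assumed Hub repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

text = "Der schnelle braune Fuchs springt über den faulen Hund."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels set, the model returns the mean cross-entropy loss over the tokens.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print("perplexity:", math.exp(loss.item()))
```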
See our paper for further details.
Citation
Please cite WECHSEL as follows:
```bibtex
@inproceedings{minixhofer-etal-2022-wechsel,
    title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
    author = "Minixhofer, Benjamin and
      Paischer, Fabian and
      Rekabsaz, Navid",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.293",
    pages = "3992--4006",
    abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
📄 License
This project is released under the MIT License.