🚀 roberta-base-wechsel-german
Model trained with WECHSEL: effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code at: https://github.com/CPJKU/wechsel
See the paper at: https://aclanthology.org/2022.naacl-main.293/
🚀 Quick Start
This project provides language models trained with the WECHSEL method, which perform well across a range of languages and tasks. See the code and paper linked above for more details; a minimal loading example follows below.
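The sketch below shows one way to load the German model for masked-language-model inference with the 🤗 Transformers library. The Hub repository ID used here (`benjamin/roberta-base-wechsel-german`) is an assumption; check the model page on the Hub for the exact identifier.

```python
# Minimal sketch: loading the German WECHSEL model for fill-mask inference.
# The repository ID "benjamin/roberta-base-wechsel-german" is an assumption; verify it on the Hub.
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

model_id = "benjamin/roberta-base-wechsel-german"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Fill-mask example on a German sentence (RoBERTa uses the <mask> token).
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for pred in fill_mask("Die Hauptstadt von Deutschland ist <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```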
✨ Key Features
- Cross-lingual transfer: effectively transfers a monolingual model to other languages, reducing training cost (a transfer sketch follows this list).
- Strong performance: outperforms comparable baseline models on several tasks (e.g. NLI, NER, perplexity).
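For context, the sketch below outlines how the WECHSEL package (linked above) is typically applied to transfer an English RoBERTa checkpoint to a new language: train a target-language tokenizer, then initialize its subword embeddings from the English ones via multilingual static word embeddings. The API calls (`WECHSEL`, `load_embeddings`, `wechsel.apply`) and the OSCAR corpus name follow the repository's documentation as I recall it, so treat this as an assumption-laden illustration rather than the canonical training script.

```python
# Sketch of the WECHSEL transfer step, adapted from the CPJKU/wechsel README.
# Assumes `pip install wechsel datasets transformers`; dataset/API details may have changed upstream.
import torch
from datasets import load_dataset
from transformers import AutoModelForMaskedLM, AutoTokenizer
from wechsel import WECHSEL, load_embeddings

source_tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Train a target-language tokenizer on German text (OSCAR used here as an assumed corpus).
target_corpus = load_dataset("oscar", "unshuffled_deduplicated_de", split="train", streaming=True)
target_tokenizer = source_tokenizer.train_new_from_iterator(
    (sample["text"] for sample in target_corpus),
    vocab_size=len(source_tokenizer),
)

# Initialize target-language subword embeddings from the English ones using
# multilingual static word embeddings and a bilingual dictionary.
wechsel = WECHSEL(
    load_embeddings("en"),
    load_embeddings("de"),
    bilingual_dictionary="german",
)
target_embeddings, info = wechsel.apply(
    source_tokenizer,
    target_tokenizer,
    model.get_input_embeddings().weight.detach().numpy(),
)
model.get_input_embeddings().weight.data = torch.from_numpy(target_embeddings)

# The transferred model is then further pretrained on target-language text as usual.
```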
📚 Documentation
Performance
RoBERTa Models
| Model | NLI Score | NER Score | Avg. Score |
|---|---|---|---|
| roberta-base-wechsel-french | 82.43 | 90.88 | 86.65 |
| camembert-base | 80.88 | 90.26 | 85.57 |

| Model | NLI Score | NER Score | Avg. Score |
|---|---|---|---|
| roberta-base-wechsel-german | 81.79 | 89.72 | 85.76 |
| deepset/gbert-base | 78.64 | 89.46 | 84.05 |

| Model | NLI Score | NER Score | Avg. Score |
|---|---|---|---|
| roberta-base-wechsel-chinese | 78.32 | 80.55 | 79.44 |
| bert-base-chinese | 76.55 | 82.05 | 79.30 |

| Model | NLI Score | NER Score | Avg. Score |
|---|---|---|---|
| roberta-base-wechsel-swahili | 75.05 | 87.39 | 81.22 |
| xlm-roberta-base | 69.18 | 87.37 | 78.28 |
GPT-2 Models
| Model | PPL |
|---|---|
| gpt2-wechsel-french | 19.71 |
| gpt2 (retrained from scratch) | 20.47 |

| Model | PPL |
|---|---|
| gpt2-wechsel-german | 26.8 |
| gpt2 (retrained from scratch) | 27.63 |

| Model | PPL |
|---|---|
| gpt2-wechsel-chinese | 51.97 |
| gpt2 (retrained from scratch) | 52.98 |

| Model | PPL |
|---|---|
| gpt2-wechsel-swahili | 10.14 |
| gpt2 (retrained from scratch) | 10.58 |
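For reference, perplexity (PPL) can be reproduced with a standard causal-LM evaluation loop. The sketch below computes perplexity for one sentence; the Hub ID `benjamin/gpt2-wechsel-german` is an assumption, and the paper's reported numbers come from a full held-out corpus rather than a single example.

```python
# Sketch: computing perplexity for a transferred GPT-2 model on one sentence.
# The repository ID "benjamin/gpt2-wechsel-german" is an assumption; verify it on the Hub.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "benjamin/gpt2-wechsel-german"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

text = "WECHSEL überträgt englische Sprachmodelle effizient auf andere Sprachen."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels set, the model returns the mean cross-entropy over predicted tokens.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print("perplexity:", math.exp(loss.item()))
```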
See our paper for more details.
Citation
Please cite WECHSEL as follows:
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
📄 License
This project is released under the MIT License.