🚀 IceBERT
IceBERT is an Icelandic language model trained with fairseq on the RoBERTa-base architecture. It can be used for a variety of downstream natural language processing tasks.
🚀 Quick Start
The model was trained with fairseq on the RoBERTa-base architecture. It is one of several models we have trained for Icelandic; for further details, see the paper cited below. The training data is listed in the following table.
| Dataset | Size | Tokens |
| --- | --- | --- |
| Icelandic Gigaword Corpus v20.05 (IGC) | 8.2 GB | 1,388M |
| Icelandic Common Crawl Corpus (IC3) | 4.9 GB | 824M |
| Greynir News articles | 456 MB | 76M |
| Icelandic Sagas | 9 MB | 1.7M |
| Open Icelandic e-books (Rafbókavefurinn) | 14 MB | 2.6M |
| Data from the medical library of Landspítali hospital | 33 MB | 5.2M |
| Student theses from Icelandic universities (Skemman) | 2.2 GB | 367M |
| Total | 15.8 GB | 2,664M |
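A minimal usage sketch for masked-token prediction via the Hugging Face `transformers` library is shown below. The Hub identifier `mideind/IceBERT` and the example sentence are assumptions; substitute the actual model ID (or a local path to a converted checkpoint) if it differs.

```python
# Minimal sketch: querying IceBERT as a fill-mask model through Hugging Face transformers.
# Assumption: the model is published on the Hub as "mideind/IceBERT"; replace this with
# the actual identifier or a local checkpoint path if it differs.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mideind/IceBERT")

# RoBERTa-style models use "<mask>" as the mask token.
for prediction in fill_mask("Ísland er <mask> land."):
    print(f"{prediction['token_str']:>15}  {prediction['score']:.3f}")
```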
📚 Documentation
The model is described in detail in the paper https://arxiv.org/abs/2201.05601. Please cite the paper if you use the model.
```bibtex
@inproceedings{snaebjarnarson-etal-2022-warm,
    title = "A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models",
    author = "Sn{\ae}bjarnarson, V{\'e}steinn and
      S{\'\i}monarson, Haukur Barri and
      Ragnarsson, P{\'e}tur Orri and
      Ing{\'o}lfsd{\'o}ttir, Svanhv{\'\i}t Lilja and
      J{\'o}nsson, Haukur and
      Thorsteinsson, Vilhjalmur and
      Einarsson, Hafsteinn",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.464",
    pages = "4356--4366",
    abstract = "We train several language models for Icelandic, including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high quality texts found online by targeting the Icelandic top-level-domain .is. Several other public data sources are also collected for a total of 16GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we manually translate and adapt the WinoGrande commonsense reasoning dataset. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low to medium resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.",
}
```
📄 License
This model is released under the CC BY 4.0 license.