🚀 ElasticBERT-BASE
ElasticBERT-BASE is the base-size implementation of the ElasticBERT model. It can be used for natural language processing tasks and provides a standard evaluation and a strong baseline for efficient NLP.
🚀 Quick Start
This section shows how to use ElasticBERT-BASE for a sequence classification task.
Basic usage
>>> from transformers import BertTokenizer as ElasticBertTokenizer
>>> from models.configuration_elasticbert import ElasticBertConfig
>>> from models.modeling_elasticbert import ElasticBertForSequenceClassification
>>> num_output_layers = 1
>>> config = ElasticBertConfig.from_pretrained('fnlp/elasticbert-base', num_output_layers=num_output_layers)
>>> tokenizer = ElasticBertTokenizer.from_pretrained('fnlp/elasticbert-base')
>>> model = ElasticBertForSequenceClassification.from_pretrained('fnlp/elasticbert-base', config=config)
>>> input_ids = tokenizer.encode('The actors are fantastic .', return_tensors='pt')
>>> outputs = model(input_ids)
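Assuming `outputs.logits` follows the standard Hugging Face `SequenceClassifierOutput` convention (which ElasticBERT's BERT-style interface suggests, though this is an assumption), the returned logits can be converted into a predicted label. A minimal sketch, shown with placeholder logits and a plain-Python softmax so it runs without the model weights:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Placeholder values standing in for outputs.logits[0].tolist()
# (hypothetical 2-class head, as configured via num_labels)
logits = [0.2, 1.5]
probs = softmax(logits)
pred = probs.index(max(probs))
print(pred)  # prints 1, the higher-scoring class
```

In practice you would read the real scores from `outputs.logits` and map `pred` to your task's label names.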
📚 Documentation
Model description
This is the base-size implementation of ElasticBERT. The accompanying paper is Towards Efficient NLP: A Standard Evaluation and A Strong Baseline, by Xiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, and Xipeng Qiu.
Code
The code is available at fastnlp/elasticbert.
Datasets
The model was trained on the following datasets:

| Property | Details |
| --- | --- |
| Training data | wikipedia, bookcorpus, c4 |
Citation
If you use this model, please cite:
@article{liu2021elasticbert,
author = {Xiangyang Liu and
Tianxiang Sun and
Junliang He and
Lingling Wu and
Xinyu Zhang and
Hao Jiang and
Zhao Cao and
Xuanjing Huang and
Xipeng Qiu},
title = {Towards Efficient {NLP:} {A} Standard Evaluation and {A} Strong Baseline},
journal = {CoRR},
volume = {abs/2110.07038},
year = {2021},
url = {https://arxiv.org/abs/2110.07038},
eprinttype = {arXiv},
eprint = {2110.07038},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-07038.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}