🚀 DeBERTa: Decoding-enhanced BERT with Disentangled Attention
DeBERTa improves on BERT and RoBERTa using disentangled attention and an enhanced mask decoder. Trained on 80GB of data, it outperforms both models on a majority of NLU tasks. This README covers the DeBERTa V2 XXLarge model and its performance on various NLU tasks.
✨ Features
- Enhanced Architecture: DeBERTa uses disentangled attention and an enhanced mask decoder to improve upon BERT and RoBERTa.
- High Performance: Trained on 80GB of data, it outperforms BERT and RoBERTa on a majority of NLU tasks.
- Multiple Sizes: Available in different sizes, including Large, XLarge, V2-XLarge, and V2-XXLarge.
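Any of these checkpoints can be loaded with the standard Hugging Face transformers Auto* classes. The snippet below is a minimal illustrative sketch (not part of the original card); it assumes transformers, torch, and sentencepiece are installed, and the checkpoint name can be swapped for any of the sizes listed above.

```python
# Minimal sketch: loading a DeBERTa V2 checkpoint with Hugging Face transformers.
# Assumes transformers, torch, and sentencepiece are installed; any of the listed
# sizes (e.g. microsoft/deberta-v2-xlarge) can be substituted for the model name.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "microsoft/deberta-v2-xxlarge"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("DeBERTa uses disentangled attention.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```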
📦 Installation
To run the DeBERTa V2-XXLarge model with the commands below, install the following dependencies (in addition to transformers and PyTorch themselves):
pip install datasets
pip install deepspeed
💻 Usage Examples
Basic Usage
To run the model with DeepSpeed, use the following commands:
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
run_glue.py \
--model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 256 \
--per_device_train_batch_size ${batch_size} \
--learning_rate 3e-6 \
--num_train_epochs 3 \
--output_dir $output_dir \
--overwrite_output_dir \
--logging_steps 10 \
--logging_dir $output_dir \
--deepspeed ds_config.json
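For reference, the same run can also be expressed programmatically with the Trainer API instead of run_glue.py. The sketch below is illustrative only, not the authors' script; it uses the standard datasets/transformers APIs, mirrors the hyperparameters of the command above, and points at the ds_config.json file downloaded earlier. Launch it under torch.distributed or the deepspeed launcher just like the command above.

```python
# Illustrative sketch of the same MNLI fine-tuning run using the Trainer API
# instead of run_glue.py. Hyperparameters mirror the command above.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "microsoft/deberta-v2-xxlarge"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

raw = load_dataset("glue", "mnli")

def preprocess(examples):
    # MNLI pairs a premise with a hypothesis; truncate to the 256-token limit used above.
    return tokenizer(examples["premise"], examples["hypothesis"],
                     truncation=True, max_length=256)

encoded = raw.map(preprocess, batched=True)

args = TrainingArguments(
    output_dir="ds_results",
    per_device_train_batch_size=8,
    learning_rate=3e-6,
    num_train_epochs=3,
    logging_steps=10,
    deepspeed="ds_config.json",  # the config downloaded above
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation_matched"],
    tokenizer=tokenizer,
)
trainer.train()
```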
Advanced Usage
You can also run the model with --sharded_ddp using the following commands:
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py \
--model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 256 \
--per_device_train_batch_size 8 \
--learning_rate 3e-6 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir \
--sharded_ddp \
--fp16
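Note: the --sharded_ddp flag relies on the Trainer's FairScale integration in older Transformers releases (you may need to pip install fairscale), and it has been superseded by the --fsdp option in recent releases; check which flag your installed version supports.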
📚 Documentation
Fine-tuning on NLU tasks
The following table shows the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks:
| Model | SQuAD 1.1 (F1/EM) | SQuAD 2.0 (F1/EM) | MNLI-m/mm (Acc) | SST-2 (Acc) | QNLI (Acc) | CoLA (MCC) | RTE (Acc) | MRPC (Acc/F1) | QQP (Acc/F1) | STS-B (P/S) |
|---|---|---|---|---|---|---|---|---|---|---|
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- | 90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- | 92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- | 92.5/- |
| DeBERTa-Large¹ | 95.5/90.1 | 90.7/88.0 | 91.3/91.1 | 96.5 | 95.3 | 69.5 | 91.0 | 92.6/94.6 | 92.3/- | 92.8/92.5 |
| DeBERTa-XLarge¹ | -/- | -/- | 91.5/91.2 | 97.0 | - | - | 93.1 | 92.1/94.3 | - | 92.9/92.7 |
| DeBERTa-V2-XLarge¹ | 95.8/90.8 | 91.4/88.9 | 91.7/91.6 | 97.5 | 95.8 | 71.1 | 93.9 | 92.0/94.2 | 92.3/89.8 | 92.9/92.9 |
| DeBERTa-V2-XXLarge¹,² | 96.1/91.4 | 92.2/89.7 | 91.7/91.9 | 97.2 | 96.0 | 72.0 | 93.5 | 93.1/94.9 | 92.7/90.3 | 93.2/93.1 |
Notes
- ¹ Following RoBERTa, for RTE, MRPC, and STS-B we fine-tune starting from the MNLI-fine-tuned models (DeBERTa-Large-MNLI, DeBERTa-XLarge-MNLI, DeBERTa-V2-XLarge-MNLI, DeBERTa-V2-XXLarge-MNLI); see the sketch after these notes. The results on SST-2/QQP/QNLI/SQuAD 2.0 also improve slightly when starting from MNLI-fine-tuned models; however, for those four tasks we report only the numbers fine-tuned from the pretrained base models.
- ² To try the XXLarge model with HF Transformers, we recommend using DeepSpeed, as it is faster and saves memory.
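As a concrete illustration of note ¹, the sketch below starts an RTE fine-tuning run from an MNLI-fine-tuned checkpoint rather than the pretrained base model. The checkpoint name microsoft/deberta-v2-xxlarge-mnli is an assumption here; substitute whichever MNLI-fine-tuned model you have available.

```python
# Sketch of note 1: fine-tune RTE starting from an MNLI-fine-tuned checkpoint
# instead of the pretrained base model. The checkpoint name below is assumed;
# substitute your own MNLI-fine-tuned model if it differs.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v2-xxlarge-mnli",
    num_labels=2,                  # RTE has two classes; MNLI has three
    ignore_mismatched_sizes=True,  # re-initialize the classification head
)
# From here, fine-tune on RTE exactly as in the run_glue.py commands above
# (e.g. --task_name rte), keeping the same small learning rate.
```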
Citation
If you find DeBERTa useful for your work, please cite the following paper:
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
📄 License
This project is licensed under the MIT License.