DeBERTa-v3-base-mnli-fever-anli
This model is built on the MoritzLaurer/DeBERTa-v3-base-mnli repository. It adds a handler.py file, which makes it easier to deploy the model to inference endpoints for zero-shot classification, i.e. classifying the relationship between a premise and a hypothesis as entailment, neutral, or contradiction.
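For orientation, custom handlers for Hugging Face Inference Endpoints follow a fixed pattern: handler.py exposes an `EndpointHandler` class with an `__init__` that loads the model and a `__call__` that serves requests. The sketch below illustrates that pattern for this NLI model; it is an assumption-based illustration, not necessarily the exact contents of this repository's handler.py (the payload format mirrors the cURL example further down).
```python
# Hypothetical sketch of a handler.py for Inference Endpoints; the real file may differ.
from typing import Any, Dict

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points to the model repository downloaded by the endpoint.
        self.tokenizer = AutoTokenizer.from_pretrained(path)
        self.model = AutoModelForSequenceClassification.from_pretrained(path)
        self.model.eval()
        self.label_names = ["entailment", "neutral", "contradiction"]

    def __call__(self, data: Dict[str, Any]) -> Dict[str, float]:
        # Assumed payload: {"inputs": {"premise": "...", "hypothesis": "..."}}
        inputs = data["inputs"]
        encoded = self.tokenizer(
            inputs["premise"], inputs["hypothesis"],
            truncation=True, return_tensors="pt",
        )
        with torch.no_grad():
            logits = self.model(**encoded).logits
        probs = torch.softmax(logits[0], -1).tolist()
        return {name: round(float(p) * 100, 1) for name, p in zip(self.label_names, probs)}
```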
Quick Start
Features
- Based on the DeBERTa-v3-base model from Microsoft, whose v3 release brings significant performance improvements over earlier versions.
- Trained on the MultiNLI dataset, which contains 392,702 NLI hypothesis-premise pairs.
- Can be used for zero-shot classification to determine the relationship between a premise and a hypothesis (see the sketch after this list).
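Because the model scores entailment between a premise and a candidate hypothesis, it also works with the standard transformers zero-shot pipeline. A minimal sketch (the example text and candidate labels are illustrative, not from this repository):
```python
# Minimal zero-shot classification sketch using the standard transformers pipeline.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-base-mnli")
result = classifier(
    "The new graphics card doubles frame rates in most games.",  # illustrative text
    candidate_labels=["technology", "politics", "sports"],       # illustrative labels
)
print(result["labels"][0], result["scores"][0])  # highest-scoring label first
```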
Installation
No specific installation steps are provided in the original document. The usage examples below only assume the transformers library and PyTorch, e.g. `pip install transformers torch sentencepiece` (the DeBERTa-v3 tokenizer requires sentencepiece).
Usage Examples
Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Pick a device; model and inputs are moved onto it below.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

# Tokenize the premise-hypothesis pair and run it through the model.
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
output = model(**inputs)

# Convert logits to probabilities and map them onto the three NLI labels.
prediction = torch.softmax(output.logits[0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
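For this premise-hypothesis pair the model should place most of the probability mass on `contradiction`, since the premise ultimately calls the movie disappointing.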
Example cURL
```bash
curl https://YOUR_INFERENCE_ENDPOINT_URL \
  -X POST \
  -d '{"inputs": {"premise": "A man is walking his dog in the park.", "hypothesis": "A person is outside with an animal."}}' \
  -H "Authorization: Bearer hf_YOUR_TOKEN_HERE" \
  -H "Content-Type: application/json"
```
Documentation
Model description
This model was trained on the MultiNLI dataset, which consists of 392,702 NLI hypothesis-premise pairs. The base model is DeBERTa-v3-base from Microsoft. The v3 variant of DeBERTa substantially outperforms previous versions of the model thanks to a different pre-training objective (see annex 11 of the original DeBERTa paper). For a more powerful model, check out DeBERTa-v3-base-mnli-fever-anli, which was trained on even more data.
Intended uses & limitations
How to use the model
The usage example is provided above in the "Usage Examples" section.
Technical Details
Training data
This model was trained on the MultiNLI dataset, which consists of 392,702 NLI hypothesis-premise pairs.
Training procedure
DeBERTa-v3-base-mnli was trained using the Hugging Face Trainer with the following hyperparameters.
```python
training_args = TrainingArguments(
    num_train_epochs=5,
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    warmup_ratio=0.1,
    weight_decay=0.06,
    fp16=True,
)
```
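For context, here is a schematic of how these arguments might be wired into the Trainer with tokenized MultiNLI. This is a sketch under assumptions (the dataset loading, preprocessing, and base-model setup are not spelled out in the card), not the author's exact training script:
```python
# Schematic sketch only; dataset handling and model setup are assumptions.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=3  # entailment / neutral / contradiction
)

# MultiNLI: the 392,702 hypothesis-premise training pairs mentioned above.
dataset = load_dataset("multi_nli")

def preprocess(batch):
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

encoded = dataset.map(preprocess, batched=True)

trainer = Trainer(
    model=model,
    args=training_args,  # the TrainingArguments above (an output_dir must also be set)
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation_matched"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```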
Eval results
The model was evaluated on the matched test set of MultiNLI and achieves 0.90 accuracy.
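As a rough illustration, a matched-accuracy number could be recomputed along these lines. This is a sketch that uses the publicly available validation_matched split as a stand-in for the matched test set; it is not necessarily the author's evaluation script, and scoring one example at a time is slow:
```python
# Sketch (assumption): re-scoring matched accuracy on MultiNLI's validation_matched split.
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

data = load_dataset("multi_nli", split="validation_matched")

correct = 0
for ex in data:
    enc = tokenizer(ex["premise"], ex["hypothesis"], truncation=True, return_tensors="pt")
    with torch.no_grad():
        pred = model(**enc).logits.argmax(-1).item()
    correct += int(pred == ex["label"])  # multi_nli labels: 0=entailment, 1=neutral, 2=contradiction

print(f"matched accuracy: {correct / len(data):.4f}")
```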
License
No license information is provided in the original document.
Limitations and bias
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.
BibTeX entry and citation info
If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or on LinkedIn.
Debugging and issues
Note that DeBERTa-v3 was released recently, and older versions of HF Transformers seem to have issues running the model (e.g. tokenizer errors). Using Transformers==4.13 might solve some issues.
Model Recycling
Evaluation on 36 datasets using MoritzLaurer/DeBERTa-v3-base-mnli as a base model yields an average score of 80.01 in comparison to 79.04 by microsoft/deberta-v3-base.
The model is ranked 1st among all tested models for the microsoft/deberta-v3-base architecture as of 09/01/2023.
Results:

| Dataset | Score |
|---|---|
| 20_newsgroup | 86.0196 |
| ag_news | 90.6333 |
| amazon_reviews_multi | 66.96 |
| anli | 60.0938 |
| boolq | 83.792 |
| cb | 83.9286 |
| cola | 86.5772 |
| copa | 72 |
| dbpedia | 79.2 |
| esnli | 91.419 |
| financial_phrasebank | 85.1 |
| imdb | 94.232 |
| isear | 71.5124 |
| mnli | 89.4426 |
| mrpc | 90.4412 |
| multirc | 63.7583 |
| poem_sentiment | 86.5385 |
| qnli | 93.8129 |
| qqp | 91.9144 |
| rotten_tomatoes | 89.8687 |
| rte | 85.9206 |
| sst2 | 95.4128 |
| sst_5bins | 57.3756 |
| stsb | 91.377 |
| trec_coarse | 97.4 |
| trec_fine | 91 |
| tweet_ev_emoji | 47.302 |
| tweet_ev_emotion | 83.6031 |
| tweet_ev_hate | 57.6431 |
| tweet_ev_irony | 77.1684 |
| tweet_ev_offensive | 83.3721 |
| tweet_ev_sentiment | 70.2947 |
| wic | 71.7868 |
| wnli | 67.6056 |
| wsc | 74.0385 |
| yahoo_answers | 71.7 |
For more information, see: Model Recycling