# DeBERTa-v3-base Zero-Shot Classification Model
This repository provides a fine-tuned `DeBERTa-v3-base` model for zero-shot text classification. It lets you classify text into predefined categories without retraining the model for each task, which makes it well suited to scenarios with scarce labeled data or to rapid prototyping of text classification solutions.
## Quick Start
The model is designed to integrate seamlessly with Hugging Face's `transformers` library. Here's a simple example:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="syedkhalid076/DeBERTa-Zero-Shot-Classification")

sequence_to_classify = "Last week I upgraded my iOS version and ever since then your app is crashing."
candidate_labels = ["mobile", "website", "billing", "account access", "app crash"]

output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
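The pipeline returns a dictionary containing the input sequence, the candidate labels sorted by descending score, and the corresponding probabilities. A minimal sketch of reading the top prediction (the dictionary below is illustrative; the scores are made up, not actual model output):

```python
# Illustrative output shape from a zero-shot classification pipeline call;
# the scores here are invented for demonstration purposes.
output = {
    "sequence": "Last week I upgraded my iOS version and ever since then your app is crashing.",
    "labels": ["app crash", "mobile", "account access", "billing", "website"],
    "scores": [0.87, 0.08, 0.03, 0.01, 0.01],
}

# Labels are sorted by descending score, so the first entry is the prediction.
top_label = output["labels"][0]
top_score = output["scores"][0]
print(f"Predicted: {top_label} ({top_score:.2f})")
```

With `multi_label=False` the scores are softmax-normalized across the candidate labels, so they sum to 1.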
## Features

- Zero-Shot Classification: Directly classify text into any set of user-defined labels without additional training.
- Multi-Label Support: Handle tasks with overlapping categories or multiple applicable labels (set `multi_label=True`).
- Pretrained Efficiency: Optimized for inference with `float16` weights stored in SafeTensors format.
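With `multi_label=True`, each label is scored independently (the scores no longer sum to 1), so a common pattern is to keep every label above a threshold. A hedged sketch over an illustrative output dictionary (the scores are invented, not real model output):

```python
# Illustrative multi-label output: each score is an independent probability
# (made up for demonstration), so several labels can apply at once.
output = {
    "sequence": "Last week I upgraded my iOS version and ever since then your app is crashing.",
    "labels": ["app crash", "mobile", "account access", "billing", "website"],
    "scores": [0.95, 0.81, 0.05, 0.02, 0.01],
}

THRESHOLD = 0.5  # tune for your precision/recall trade-off
selected = [
    label
    for label, score in zip(output["labels"], output["scores"])
    if score >= THRESHOLD
]
print(selected)  # both "app crash" and "mobile" clear the threshold
```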
## Installation

The only required dependency is the `transformers` library. You can install it with:

```bash
pip install transformers
```
## Usage Examples

### Basic Usage

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="syedkhalid076/DeBERTa-Zero-Shot-Classification")

sequence_to_classify = "Last week I upgraded my iOS version and ever since then your app is crashing."
candidate_labels = ["mobile", "website", "billing", "account access", "app crash"]

output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
## Documentation

### Model Overview

| Property | Details |
| --- | --- |
| Model Type | DeBERTa-v3-base |
| Architecture | `DebertaV2ForSequenceClassification` |
| Language | English (`en`) |
| Data Type | `float16` (SafeTensors format for efficiency) |
This model leverages the capabilities of `DeBERTa-v3-base`, fine-tuned on datasets such as `multi_nli`, `facebook/anli`, and `fever` to enhance its zero-shot classification performance.
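These are all natural language inference (NLI) datasets, because NLI-based zero-shot classification works by rephrasing each candidate label as an entailment hypothesis and scoring the input text against it; the `transformers` pipeline uses the template `"This example is {}."` by default. A minimal sketch of that hypothesis-construction step (the `build_hypotheses` helper is illustrative, not part of the library):

```python
def build_hypotheses(labels, template="This example is {}."):
    # Each candidate label becomes an NLI hypothesis; the model then scores
    # whether the input text (the premise) entails each hypothesis.
    return [template.format(label) for label in labels]

hypotheses = build_hypotheses(["billing", "app crash"])
print(hypotheses)
# ['This example is billing.', 'This example is app crash.']
```

The pipeline also accepts a `hypothesis_template` argument, so a domain-specific template (e.g. `"This feedback is about {}."`) can sometimes improve accuracy.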
### Applications
I trained this model for UX research purposes, but it can be used for any of the following tasks:
- Customer Feedback Analysis: Categorize user reviews or feedback.
- Intent Detection: Identify user intents in conversational AI systems.
- Content Classification: Classify articles, social media posts, or documents.
- Error Detection: Detect error reports in logs or feedback.
### Training Data
The model was fine-tuned on the following datasets:
- MultiNLI: Multi-genre natural language inference corpus.
- ANLI: Adversarial NLI dataset for robust entailment modeling.
- FEVER: Fact Extraction and Verification dataset.
These datasets help the model generalize across a wide range of zero-shot classification tasks.
### Performance
This model demonstrates strong performance across various zero-shot classification benchmarks, effectively distinguishing between user-defined categories in diverse text inputs.
### Limitations

- Language Support: Currently supports English (`en`) only.
- Context Length: Performance may degrade on very long text inputs. Consider truncating inputs to the model's maximum token length.
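For an exact cut at the model's token limit, pass `truncation=True` through the pipeline so the tokenizer truncates for you. As a lightweight pre-filter that avoids loading the tokenizer, a rough word-level truncation can also work; the helper below is a sketch (the 300-word cap is an arbitrary assumption, since the real limit is measured in subword tokens, not words):

```python
def rough_truncate(text, max_words=300):
    # Crude word-level truncation as a pre-filter. The model's actual limit
    # is in subword tokens, so use the model's tokenizer with truncation=True
    # when you need an exact cut.
    words = text.split()
    return text if len(words) <= max_words else " ".join(words[:max_words])

long_review = "the app crashes constantly " * 200
short_review = rough_truncate(long_review)
print(len(short_review.split()))  # capped at 300 words
```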
## License
This model is licensed under the MIT License. You are free to use, modify, and distribute it with appropriate attribution.
## Citation

If you use this model in your work, please cite this repository:

```bibtex
@misc{syedkhalid076_deberta_zeroshoot,
  author = {Syed Khalid Hussain},
  title  = {DeBERTa Zero-Shot Classification},
  year   = {2024},
  url    = {https://huggingface.co/syedkhalid076/DeBERTa-Zero-Shot-Classification}
}
```
## Acknowledgements

This model was fine-tuned using Hugging Face Transformers and is hosted on the Hugging Face Model Hub. Special thanks to the creators of `DeBERTa-v3` and the contributing datasets.