
DeBERTa Small Long NLI

Developed by tasksource
Based on DeBERTa-v3-small, with the context length extended to 1680 tokens and fine-tuned on the tasksource dataset; suited to long-text natural language inference tasks
Downloads 40.85k
Release Time : 1/31/2024

Model Overview

This model is a DeBERTa variant optimized for long-context natural language inference. It supports tasks such as zero-shot classification and natural language inference, and can serve as a base model for fine-tuning reward models or classifiers.
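In practice the model would be loaded through an inference library and run over a (premise, hypothesis) pair; the sketch below shows only the scoring step that follows the forward pass, turning three-way logits into entailment/neutral/contradiction probabilities. The label order and the logit values are illustrative assumptions, not real model output.

```python
import math

# Three-way NLI label order used by DeBERTa-style classifiers
# (the exact order is model-specific; this ordering is an assumption).
NLI_LABELS = ["entailment", "neutral", "contradiction"]

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def nli_probs(logits):
    """Map the model's three raw logits to a {label: probability} dict."""
    return dict(zip(NLI_LABELS, softmax(logits)))

# Placeholder logits standing in for a real forward pass; a high first
# logit means the premise entails the hypothesis.
probs = nli_probs([3.1, 0.2, -2.4])
print(max(probs, key=probs.get))  # → entailment
```

The same three-way head is reused for every downstream task listed below, which is why one NLI model can double as a zero-shot classifier.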

Model Features

Long-context support
Context length extended to 1680 tokens, suitable for handling long-text reasoning tasks
Multi-task training
Fine-tuned for 250,000 steps on the tasksource dataset, covering various NLI tasks
Zero-shot capability
Demonstrates strong zero-shot performance on multiple tasks, such as achieving 70% accuracy on WNLI

Model Capabilities

Natural language inference
Zero-shot classification
Text classification
Multi-task learning

Use Cases

Text analysis
Zero-shot classification
Classifies text against arbitrary candidate labels by casting each label as an entailment hypothesis
Achieves 70% accuracy on tasks like WNLI
Logical reasoning
Handles long-text tasks requiring logical reasoning
Performs well on logical reasoning tasks like FOLIO
Model development
Reward model foundation
Can serve as a base model for fine-tuning reward models or classifiers
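The zero-shot use case above can be sketched as follows: each candidate label is wrapped in a hypothesis template, the (text, hypothesis) pair is scored by the NLI model, and the label whose hypothesis is most strongly entailed wins. Here `score_entailment` is a hypothetical keyword-overlap stub standing in for the real model call, so the sketch stays self-contained and runnable.

```python
import re

# Hypothetical stand-in for the NLI model: a tiny keyword table mapping
# each label to indicative premise words. A real system would instead run
# the model's forward pass for every (text, hypothesis) pair.
KEYWORDS = {
    "sports": {"striker", "scored", "match", "goal"},
    "politics": {"election", "senate", "vote"},
    "cooking": {"recipe", "oven", "simmer"},
}

def score_entailment(premise, hypothesis):
    """Return a pseudo entailment probability for the pair (stub)."""
    premise_words = set(re.findall(r"[a-z]+", premise.lower()))
    for label, cues in KEYWORDS.items():
        if label in hypothesis.lower():
            hits = len(premise_words & cues)
            return hits / (hits + 1)  # squashed into [0, 1)
    return 0.0

def zero_shot_classify(text, candidate_labels,
                       template="This example is about {}."):
    # Build one hypothesis per label and keep the best-entailed one.
    scores = {
        label: score_entailment(text, template.format(label))
        for label in candidate_labels
    }
    return max(scores, key=scores.get), scores

label, _ = zero_shot_classify(
    "The striker scored twice in the final minutes of the match.",
    ["sports", "politics", "cooking"],
)
print(label)  # → sports
```

The hypothesis template is the only task-specific piece: changing it (e.g. "This text expresses {} sentiment.") repurposes the same NLI model for a different labeling scheme without any retraining.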