DeBERTa Base Long NLI

Developed by tasksource
This model is based on DeBERTa-v3-base with its context length extended to 1280 tokens, and was fine-tuned for 250,000 steps on the tasksource dataset collection, focusing on natural language inference (NLI) and zero-shot classification tasks.
Downloads 541
Release Time: 6/28/2024

Model Overview

This model demonstrates strong zero-shot performance on multiple NLI tasks. Because classification is framed as textual entailment, it can be used for zero-shot classification with arbitrary labels, for standard natural language inference, and as a base for further fine-tuning on new tasks.
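As an illustration, zero-shot classification can be run with the Hugging Face transformers pipeline. The sketch below is a minimal example, assuming the checkpoint is published on the Hub as tasksource/deberta-base-long-nli (an id inferred from the title and author above, not stated on this page); candidate labels are arbitrary strings scored via entailment.

```python
from transformers import pipeline

# Hub id assumed from the title/author of this page; verify before use.
classifier = pipeline(
    "zero-shot-classification",
    model="tasksource/deberta-base-long-nli",
)

result = classifier(
    "The new battery lasts two full days on a single charge.",
    candidate_labels=["battery life", "screen quality", "price"],
)
# Labels are returned sorted by score; the first entry is the best match.
print(result["labels"][0], result["scores"][0])
```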

Model Features

Long-text processing capability
Context length extended to 1280 tokens, making the model particularly suitable for long-text NLI tasks (see the inference sketch after this list)
Multi-task training
Trained on diverse NLI datasets from tasksource, covering various task types such as logical reasoning and fact-checking
Strong zero-shot capability
Achieves 70% accuracy on tasks like WNLI without task-specific fine-tuning
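The sketch below shows a direct NLI (entailment) prediction that makes use of the extended context window. It is a rough example under the assumption that the checkpoint is available as tasksource/deberta-base-long-nli and that max_length=1280 matches the figure above; label names should be read from model.config.id2label rather than assumed.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "tasksource/deberta-base-long-nli"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A long passage of up to roughly 1280 tokens goes here..."
hypothesis = "The passage supports this claim."

# Encode the premise/hypothesis pair using the extended context length.
inputs = tokenizer(
    premise,
    hypothesis,
    truncation=True,
    max_length=1280,
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Label order differs between checkpoints; read it from the config.
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(p.item(), 3))
```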

Model Capabilities

Zero-shot classification
Natural language inference
Logical reasoning
Fact-checking
Textual entailment judgment

Use Cases

Text classification
Zero-shot sentiment analysis
Classifies the sentiment of text without task-specific training
Achieves 72.2% accuracy on the chatbot_arena_conversations dataset
Logical reasoning
Logical question answering
Solves NLI problems requiring logical reasoning
Achieves 61.8% accuracy on the FOLIO dataset
Fact-checking
Document-level fact-checking
Handles fact-checking over long documents (a helper sketch follows at the end of this section)
Achieves 90% accuracy on the doc-nli dataset
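As a rough illustration of the document-level fact-checking use case, the helper below treats the document as the NLI premise and the claim as the hypothesis, and returns the entailment probability as a verification score. The Hub id, the "entailment" label name, and the verify_claim helper itself are assumptions made for this sketch, not part of the original model card.

```python
from transformers import pipeline

# Assumed Hub id; top_k=None returns scores for every label.
nli = pipeline(
    "text-classification",
    model="tasksource/deberta-base-long-nli",
    top_k=None,
)

def verify_claim(document: str, claim: str) -> float:
    """Return the entailment probability of `claim` given `document`."""
    scores = nli(
        {"text": document, "text_pair": claim},  # premise / hypothesis pair
        truncation=True,
        max_length=1280,
    )
    # Label name is an assumption; check model.config.id2label if it differs.
    return next(s["score"] for s in scores if s["label"].lower() == "entailment")

print(verify_claim("Full report text ...", "The report says revenue grew."))
```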