
Zero-Shot Vanilla Bi-Encoder

Developed by claritylab
A BERT-based bi-encoder model designed specifically for zero-shot text classification, trained on the UTCD dataset
Downloads 27
Release Time: 5/15/2023

Model Overview

This model adopts a dual-encoder classification framework for zero-shot text classification: the input text and the candidate labels are encoded independently, so the model can assign texts to new categories without task-specific training data.

Model Features

Zero-shot learning capability
Capable of classifying new categories without task-specific training data
Dual-encoder architecture
Employs a dual-encoder design that encodes the input text and the candidate labels separately and computes matching scores via cosine similarity (see the sketch after this list)
Multi-domain adaptability
Trained on the standardized multi-domain UTCD dataset, suitable for various text classification scenarios
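Below is a minimal sketch of the bi-encoder scoring described above, using the Hugging Face transformers library. The pooling strategy (masked mean pooling over the last hidden states) and the example text and labels are assumptions for illustration; the checkpoint may ship with its own recommended loading and pooling setup.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_name = "claritylab/zero-shot-vanilla-bi-encoder"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def embed(sentences):
    """Encode a list of strings into mean-pooled BERT embeddings."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state            # (batch, seq_len, hidden)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)      # masked mean pooling

# Encode the text and the candidate labels independently (bi-encoder style),
# then rank labels by cosine similarity to the text embedding.
labels = ["weather query", "play music", "add to playlist"]
text_emb = embed(["Will it rain in Boston tomorrow?"])
label_embs = embed(labels)

scores = F.cosine_similarity(text_emb, label_embs)
print(dict(zip(labels, scores.tolist())))
```

Because text and labels are encoded separately, label embeddings can be precomputed once and reused across many inputs, which is the main practical advantage of the bi-encoder design over a cross-encoder.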

Model Capabilities

Zero-shot text classification
Text semantic matching
Multi-category classification

Use Cases

Natural Language Processing
Intent recognition
Identify intent categories from user input, such as weather queries, music playback, etc.
Example: for a playlist-related utterance, the label 'Add To Playlist' achieved the highest similarity score of 0.72 (see the usage sketch after this list)
Text classification
Classify text into unseen categories
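The following usage sketch applies the embed() helper from the earlier example to intent recognition. The SNIPS-style label set and the query sentence are illustrative assumptions, not necessarily the exact inputs behind the 0.72 score quoted above.

```python
# Reuses embed() and F from the earlier sketch.
intent_labels = [
    "Add To Playlist", "Book Restaurant", "Get Weather",
    "Play Music", "Rate Book", "Search Creative Work", "Search Screening Event",
]
query = "I'd like to have this track added to my Classical Relaxations playlist."

query_emb = embed([query])
label_embs = embed(intent_labels)
scores = F.cosine_similarity(query_emb, label_embs)

# Pick the label with the highest cosine similarity as the predicted intent.
best = scores.argmax().item()
print(f"Predicted intent: {intent_labels[best]} (cosine similarity {scores[best]:.2f})")
```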