
DistilRoBERTa Base

Developed by distilbert
DistilRoBERTa is a distilled version of the RoBERTa-base model with fewer parameters and faster inference, suited to English text-processing tasks.
Downloads 1.2M
Release Time: 3/2/2022

Model Overview

A Transformer-based language model compressed from RoBERTa-base via knowledge distillation, retaining most of the teacher's performance while offering faster inference.

Model Features

Efficient Inference
Approximately 2x faster inference than the original RoBERTa-base
Lightweight Design
34% fewer parameters than RoBERTa-base (82 million vs. 125 million); a quick check is sketched after this list
Knowledge Distillation
Trained with the same distillation procedure as DistilBERT, retaining over 90% of the teacher model's GLUE performance
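The parameter figures quoted above can be checked directly. Below is a minimal sketch, assuming the Hugging Face transformers library and access to the distilroberta-base and roberta-base checkpoints on the Hub, that loads both encoders and counts their parameters.

```python
# Minimal sketch: verify the parameter counts quoted above.
# Assumes the `transformers` library is installed and both checkpoints
# can be downloaded from the Hugging Face Hub.
from transformers import AutoModel

for name in ("distilroberta-base", "roberta-base"):
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
    # Expected: roughly 82M for distilroberta-base, 125M for roberta-base
```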

Model Capabilities

Masked language modeling (see the example after this list)
Text classification
Sequence labeling
Question answering
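As a sketch of the masked language modeling capability, the snippet below uses the transformers fill-mask pipeline with the distilroberta-base checkpoint. Note that RoBERTa-style tokenizers use <mask> as the mask token; the example sentence is illustrative only.

```python
# Minimal sketch: masked language modeling with distilroberta-base.
# Assumes the `transformers` library is installed.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="distilroberta-base")

# RoBERTa-style models expect "<mask>" as the mask token.
for pred in fill_mask("The goal of life is <mask>."):
    print(f"{pred['token_str']!r:>12}  score={pred['score']:.3f}")
```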

Use Cases

Text Understanding
Sentiment Analysis
Classify reviews as positive/negative sentiment (a classification sketch follows this section)
Achieves 92.5% accuracy on the SST-2 dataset
Text Similarity Calculation
Measure semantic similarity between two texts
Scores 88.3 on the STS-B dataset
Information Extraction
Named Entity Recognition
Identify entities such as person names and locations in text
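The classification use cases require a task-specific head on top of the base encoder; the published distilroberta-base checkpoint ships without one, so results such as the 92.5% SST-2 accuracy come from fine-tuned variants. Below is a minimal sketch, assuming the transformers and torch libraries, of how a binary sentiment head would be attached and queried; the head here is freshly initialized, so in practice it would be fine-tuned on SST-2 (or similar labeled data) first.

```python
# Minimal sketch: attaching a binary sentiment head to distilroberta-base.
# Assumes the `transformers` and `torch` libraries are installed. The
# classification head is freshly initialized here, so outputs are only
# meaningful after fine-tuning on a dataset such as SST-2.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilroberta-base", num_labels=2  # 0 = negative, 1 = positive
)

inputs = tokenizer("A thoroughly enjoyable film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))  # [negative, positive] probabilities
```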