
DistilBERT Base Uncased Sparse 90% Unstructured PruneOFA

Developed by Intel
This is a sparse pre-trained model achieving 90% weight sparsity through one-shot pruning, suitable for fine-tuning on various language tasks.
Downloads: 78
Release Time: 3/2/2022

Model Overview

The model is pruned once with the general one-shot Prune Once-for-All (Prune OFA) method, reducing computational overhead through weight sparsity while keeping the checkpoint general enough for transfer learning.
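
As a rough illustration, the sparse checkpoint can be loaded like any ordinary Hugging Face model. The snippet below is a minimal sketch; it assumes the transformers library and the hub id Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa, which is the model this card appears to describe.

```python
# Minimal loading sketch (assumed hub id; adjust if the name differs).
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)  # sparse pre-trained masked-LM checkpoint
```

From this starting point, the masked-LM head is replaced by a task-specific head (question answering, classification, and so on) and the model is fine-tuned as usual.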

Model Features

Prune Once-for-All (One-shot Pruning)
A single pruning pass yields a general sparse model that adapts to multiple downstream tasks, so no per-task re-pruning is needed.
90% Weight Sparsity
Unstructured matrix sparsification removes 90% of the weights, significantly reducing computational overhead (a sparsity-check sketch follows this list).
Transfer Learning Friendly
Retains enough of the important weights to fine-tune well on a variety of language tasks.
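
Because the sparsity is unstructured, it can be checked directly by counting zero-valued weights. The sketch below assumes PyTorch and the same hub id as above, and it only inspects the encoder's Linear layers on the assumption that embeddings are left dense; treat the exact layer selection as illustrative.

```python
# Sketch: estimate the unstructured sparsity of the encoder's Linear weights.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa"  # assumed hub id
)

zeros, total = 0, 0
for name, module in model.named_modules():
    # DistilBERT's encoder lives under "transformer.*"; embeddings are skipped.
    if isinstance(module, torch.nn.Linear) and name.startswith("transformer"):
        w = module.weight.detach()
        zeros += (w == 0).sum().item()
        total += w.numel()

print(f"Sparsity of encoder Linear weights: {zeros / total:.2%}")
```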

Model Capabilities

Text Understanding
Transfer Learning
Question Answering
Sentiment Analysis
Natural Language Inference

Use Cases

Natural Language Processing
Question Answering
Can be fine-tuned for QA tasks
EM/F1 of 76.91/84.82 on SQuAD v1.1
Text Classification
Suitable for classification tasks such as sentiment analysis (a fine-tuning sketch follows this list)
Accuracy of 90.02% on SST-2
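
For the SST-2 use case, a plain transformers fine-tuning loop is enough to adapt the sparse checkpoint. The sketch below uses the standard Trainer and the GLUE/SST-2 split from datasets; the hyperparameters are illustrative and are not the recipe behind the reported 90.02% accuracy. Note that preserving the 90% sparsity during fine-tuning additionally requires freezing the pruning mask, which this sketch omits.

```python
# Illustrative fine-tuning sketch on SST-2 (hyperparameters are placeholders).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

sst2 = load_dataset("glue", "sst2")
encoded = sst2.map(lambda ex: tokenizer(ex["sentence"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="sst2-sparse-distilbert",
    per_device_train_batch_size=32,
    num_train_epochs=3,
    learning_rate=2e-5,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
print(trainer.evaluate())
```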