BERT Base Uncased Sparse 85% Unstructured (Prune Once for All)

Developed by: Intel
This is a sparse pre-trained model that can be fine-tuned for various language tasks; pruning 85% of its weights reduces computational overhead.
Downloads: 15
Release Time: 3/2/2022

Model Overview

This model was pruned with the Prune Once for All (Prune OFA) method: its weight matrices are sparsified once during pre-training, cutting computational cost while maintaining accuracy, and the resulting sparse model can then be fine-tuned for a range of downstream language tasks.
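
A minimal loading sketch with the Hugging Face transformers library is shown below. The model id is an assumption inferred from this card's title and Intel's Hugging Face organization; verify it before use.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Assumed Hugging Face model id, inferred from this card's title.
model_id = "Intel/bert-base-uncased-sparse-85-unstructured-pruneofa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
```

From here the checkpoint is used like any bert-base-uncased model, except that 85% of its encoder weights are zero.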

Model Features

Prune Once for All Method
Pruned a single time during pre-training, the model adapts to multiple downstream tasks with no task-specific pruning required.
85% Unstructured Weight Sparsity
Sets 85% of the weights to zero, significantly reducing computational overhead (see the verification sketch after this list).
Multi-Task Adaptability
Can be fine-tuned for various language tasks such as question answering, natural language inference, and sentiment classification.
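
To sanity-check the claimed sparsity level, the sketch below counts exactly-zero entries in the model's weight matrices. Which layers were actually pruned is an assumption here (embeddings and biases are presumed dense), so the measured figure is approximate.

```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained(
    "Intel/bert-base-uncased-sparse-85-unstructured-pruneofa"  # assumed model id
)

def weight_sparsity(model) -> float:
    """Fraction of exactly-zero entries across the model's 2-D weight matrices."""
    zeros = total = 0
    for name, param in model.named_parameters():
        # Assumption: pruning targets the linear-layer weight matrices;
        # embeddings and biases are typically left dense.
        if param.dim() == 2 and "embeddings" not in name:
            zeros += int((param == 0).sum())
            total += param.numel()
    return zeros / total

print(f"weight sparsity: {weight_sparsity(model):.1%}")  # expected to be close to 85%
```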

Model Capabilities

Text Understanding
Language Model Fine-tuning
Question Answering System Support
Sentiment Analysis
Natural Language Inference

Use Cases

Question Answering Systems
SQuAD: fine-tuned on the Stanford Question Answering Dataset; EM/F1 81.10/88.42.
Text Classification
Sentiment Analysis: fine-tuned on the SST-2 sentiment classification task; accuracy 91.46% (a fine-tuning sketch follows this list).
Natural Language Inference
MNLI: fine-tuned for multi-genre natural language inference; matched (MNLI-m) accuracy 82.71%, mismatched (MNLI-mm) accuracy 83.67%.
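
As a concrete illustration of the SST-2 use case above, here is a minimal fine-tuning sketch using the Hugging Face Trainer. The hyperparameters and dataset handling are illustrative assumptions, not the recipe behind the scores listed; note also that plain fine-tuning does not force pruned weights to stay at zero, so extra mask-locking logic would be needed to preserve the 85% sparsity.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_id = "Intel/bert-base-uncased-sparse-85-unstructured-pruneofa"  # assumed id
tokenizer = AutoTokenizer.from_pretrained(model_id)
# A fresh binary-classification head is added on top of the sparse encoder.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

sst2 = load_dataset("glue", "sst2")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

sst2 = sst2.map(tokenize, batched=True)

# Illustrative hyperparameters only; not Intel's published recipe.
args = TrainingArguments(
    output_dir="sst2-sparse-bert",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    num_train_epochs=3,
)

Trainer(
    model=model,
    args=args,
    train_dataset=sst2["train"],
    eval_dataset=sst2["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
).train()
```

The same pattern applies to the SQuAD and MNLI use cases by swapping in the corresponding task head and dataset.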