
BERT Base Uncased Sparse 90% Unstructured (Prune OFA)

Developed by Intel
This is a sparsely pretrained BERT-Base model achieving 90% unstructured weight sparsity through one-shot pruning during pretraining, suitable for fine-tuning on a variety of language tasks.
Downloads 178
Release Time: 3/2/2022

Model Overview

This model was sparsified with a one-shot universal pruning method (Prune Once for All): the network is pruned once during pretraining, preserving critical information while reducing computational overhead, and the resulting sparse checkpoint can be fine-tuned for downstream tasks such as question answering and natural language inference.
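As a quick orientation, here is a minimal loading sketch using the Hugging Face `transformers` library. The Hub id `Intel/bert-base-uncased-sparse-90-unstructured-pruneofa` is assumed from the card title; verify it before use.

```python
from transformers import AutoTokenizer, AutoModel

# Hub id assumed from the card title; confirm it resolves before relying on it.
model_id = "Intel/bert-base-uncased-sparse-90-unstructured-pruneofa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode one sentence and inspect the encoder output.
inputs = tokenizer("Sparse pretraining keeps most of BERT's accuracy.",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 768]) for BERT-Base
```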

Model Features

High Sparsity
Reaches 90% unstructured weight sparsity through pruning, significantly reducing compute and memory requirements (a sparsity-check sketch follows this list)
One-shot Universal Pruning
A single pruning pass transfers across multiple downstream tasks, eliminating the need for task-specific re-pruning
Performance Retention
Maintains model performance while achieving high sparsity, balancing efficiency and accuracy
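The 90% figure can be checked directly by counting zero-valued weights in the loaded checkpoint. The sketch below is one reasonable accounting, assuming the target covers the 2-D weight matrices of the Transformer layers while embeddings, biases, and LayerNorm parameters stay dense; the exact set of tensors counted may differ from the official measurement.

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "Intel/bert-base-uncased-sparse-90-unstructured-pruneofa")

zeros, total = 0, 0
for name, param in model.named_parameters():
    # Assumption: only 2-D weight matrices of the encoder count toward the
    # 90% target; embeddings, biases, and LayerNorm are left dense.
    if param.dim() == 2 and "embeddings" not in name:
        zeros += param.numel() - torch.count_nonzero(param).item()
        total += param.numel()

print(f"measured weight sparsity: {zeros / total:.2%}")  # expected near 90%
```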

Model Capabilities

Text Understanding
Language Representation Learning
Transfer Learning

Use Cases

Natural Language Processing
Question Answering Systems
Can be fine-tuned for building QA systems
Achieves 79.83 EM / 87.25 F1 on SQuAD v1.1
Text Classification
Applicable to sentiment analysis and other text classification tasks (a fine-tuning sketch follows the use-case list)
Achieves 90.88% accuracy on SST-2
Natural Language Inference
Suitable for multi-genre natural language inference tasks
Achieves 81.45%/82.43% accuracy on MNLI-m/mm
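For concreteness, here is a minimal fine-tuning sketch for the SST-2 use case using `transformers` and `datasets`. The hyperparameters and output path are illustrative, not those behind the reported 90.88% accuracy, and plain fine-tuning does not freeze the zero pattern: reproducing the sparse results requires locking the pruning mask during training, which this sketch omits.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "Intel/bert-base-uncased-sparse-90-unstructured-pruneofa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# GLUE / SST-2: binary sentiment classification.
dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="sst2-sparse-bert",   # hypothetical output directory
    per_device_train_batch_size=32,  # illustrative hyperparameters
    learning_rate=2e-5,
    num_train_epochs=3,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
print(trainer.evaluate())
```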