
bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa

Developed by Intel
This model is a BERT-Base fine-tuned for QA tasks, using 80% 1x4 block-sparse pre-training combined with knowledge distillation.
Release Time: 3/2/2022

Model Overview

A sparse pre-trained Transformer language model trained with weight pruning and model distillation, suitable for QA system tasks.

Model Features

Sparse Pre-training
Uses 80% 1x4 block-sparse pre-training to reduce model parameters while maintaining performance.
Knowledge Distillation
Incorporates knowledge distillation to enhance model performance on downstream tasks.
Transfer Learning
Supports transferring knowledge from sparse pre-trained models to various downstream tasks.
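To make the "80% 1x4 block-sparse" feature concrete: it means that 80% of the weight values are zero, and the zeros are arranged in contiguous 1x4 blocks along each weight matrix row, which is friendlier to hardware acceleration than unstructured sparsity. A minimal sketch of how one could measure this kind of block sparsity (the function name and toy matrix are illustrative, not part of the released model):

```python
import numpy as np

def block_sparsity(weights: np.ndarray, block=(1, 4)) -> float:
    """Fraction of (1 x 4) blocks that are entirely zero.

    Assumes `weights` has dimensions divisible by the block shape.
    """
    rows, cols = weights.shape
    br, bc = block
    # Reshape so each (br x bc) block gets its own axes to reduce over.
    blocks = weights.reshape(rows // br, br, cols // bc, bc)
    zero_blocks = np.all(blocks == 0, axis=(1, 3))
    return float(zero_blocks.mean())

# Toy 2x8 matrix: four 1x4 blocks, three of them zeroed out.
w = np.zeros((2, 8))
w[0, 0:4] = 1.0  # one dense block remains
print(block_sparsity(w))  # -> 0.75
```

Running the same check over a real pruned checkpoint's linear layers should report roughly 0.8 for this model, per the stated sparsity target.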

Model Capabilities

Question Answering
Natural Language Understanding

Use Cases

Question Answering
Reading Comprehension
Answer questions based on given text.
Achieves 81.29% exact match (EM) and an 88.47% F1 score on the SQuAD v1.1 dataset.
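For extractive QA like this, the usual workflow is to load the checkpoint through the Hugging Face Transformers question-answering pipeline, which handles tokenization and span extraction. A minimal sketch, assuming the model is published on the Hub under the Intel namespace with this id (the question and context strings are illustrative):

```python
from transformers import pipeline

# Downloads the checkpoint from the Hugging Face Hub on first use.
qa = pipeline(
    "question-answering",
    model="Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa",
)

result = qa(
    question="How much of the weights are pruned?",
    context="The model is pre-trained with 80% of its weights removed in "
            "1x4 blocks, then fine-tuned for question answering with "
            "knowledge distillation.",
)
print(result["answer"])  # an answer span extracted from the context
```

The pipeline returns a dict with the extracted `answer` string plus its `score` and character offsets in the context.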