
# Weight Pruning

## Bert Large Uncased Wwm Squadv2 X2.15 F83.2 D25 Hybrid V1

License: MIT · Author: madlag
Pruned with the nn_pruning library, this model retains 32% of the original weights, runs 2.15× faster than the dense original, and reaches an F1 score of 83.22 on SQuAD v2.
Tags: Question Answering · Transformers · English
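
A minimal sketch of how a pruned checkpoint like this is typically used for extractive question answering, assuming the Hugging Face model id matches the listing name (the exact id below is an assumption, not confirmed by this page):

```python
# Minimal sketch: extractive QA with a pruned BERT checkpoint.
# The model id is an assumption derived from the listing name above.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1",
)

result = qa(
    question="What does weight pruning remove?",
    context="Weight pruning removes low-importance weights from a network, "
            "shrinking the model and speeding up inference at a small accuracy cost.",
)
print(result["answer"], result["score"])
```

Because nn_pruning removes entire blocks of weights, the resulting checkpoint loads through the standard transformers API with no special handling; the speedup comes from the smaller effective model.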
## Bert Base Uncased Sparse 85 Unstructured Pruneofa

License: Apache-2.0 · Author: Intel
A sparse pre-trained BERT-Base model with 85% unstructured weight sparsity that can be fine-tuned for various language tasks, reducing computational overhead through weight pruning.
Tags: Large Language Model · Transformers · English
## Bert Base Uncased Sparse 90 Unstructured Pruneofa

License: Apache-2.0 · Author: Intel
A sparsely pretrained BERT-Base model that achieves 90% weight sparsity through one-shot pruning, suitable for fine-tuning on various language tasks.
Tags: Large Language Model · Transformers · English
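
Unstructured sparsity of this kind can be verified directly by counting zero-valued weights in the loaded checkpoint. A minimal sketch, assuming PyTorch and the transformers library; the model id is an assumption derived from the listing name above:

```python
# Minimal sketch: measure the fraction of zero weights in a sparse checkpoint.
# The model id is an assumption based on the listing name, not confirmed here.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "Intel/bert-base-uncased-sparse-90-unstructured-pruneofa"
)

zeros, total = 0, 0
for name, param in model.named_parameters():
    if "weight" in name:  # count weight matrices only, not biases
        zeros += (param == 0).sum().item()
        total += param.numel()

print(f"overall weight sparsity: {zeros / total:.1%}")  # expected near 90%
```

Note that unstructured sparsity alone does not speed up dense matrix kernels; it mainly reduces the information content of the weights, and runtime gains require sparse-aware kernels or further structured compression.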