# Language Model Compression
## Xtremedistil L6 H384 Uncased Finetuned Wikitext103

- Publisher: saghar
- License: MIT
- Category: Large Language Model
- Framework: Transformers
- Downloads: 18
- Likes: 0

A lightweight distilled model based on Microsoft's XtremeDistil, fine-tuned on the WikiText-103 dataset and suitable for text generation tasks.
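A minimal usage sketch with the Transformers library, assuming the checkpoint is published on the Hugging Face Hub under the repo id `saghar/xtremedistil-l6-h384-uncased-finetuned-wikitext103` and ships a masked-language-modeling head; both the repo id and the fill-mask usage are assumptions, not confirmed by the listing.

```python
# Minimal sketch: load the distilled, WikiText-103-fine-tuned checkpoint
# and run a fill-mask query with the Transformers pipeline API.
# Assumption: the repo id below matches the Hub listing and the checkpoint
# exposes a masked-language-modeling head.
from transformers import pipeline

model_id = "saghar/xtremedistil-l6-h384-uncased-finetuned-wikitext103"  # assumed repo id
fill_mask = pipeline("fill-mask", model=model_id)

for candidate in fill_mask("The capital of France is [MASK]."):
    print(f"{candidate['token_str']:>12}  score={candidate['score']:.3f}")
```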
## Bert Large Uncased Sparse 90 Unstructured Pruneofa

- Publisher: Intel
- License: Apache-2.0
- Category: Large Language Model
- Framework: Transformers
- Language: English
- Downloads: 13
- Likes: 1

A sparse pre-trained BERT-Large model with 90% weight sparsity obtained through unstructured pruning, suitable for fine-tuning on various language tasks.
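A short sketch of how the 90% unstructured sparsity could be verified by counting zero-valued encoder weights; the repo id `Intel/bert-large-uncased-sparse-90-unstructured-pruneofa` is an assumed Hugging Face Hub id, not given by the listing.

```python
# Minimal sketch: load the sparse BERT-Large checkpoint and measure how many
# encoder weights are exactly zero, which is what "90% unstructured sparsity"
# refers to. Assumption: the repo id below matches the Hub listing.
from transformers import AutoModel

model_id = "Intel/bert-large-uncased-sparse-90-unstructured-pruneofa"  # assumed repo id
model = AutoModel.from_pretrained(model_id)

zero, total = 0, 0
for name, param in model.named_parameters():
    # Unstructured pruning targets the 2-D weight matrices of the encoder,
    # not biases, LayerNorm parameters, or the embedding tables.
    if param.dim() == 2 and "embeddings" not in name:
        zero += (param == 0).sum().item()
        total += param.numel()

print(f"encoder weight sparsity: {zero / total:.1%}")
```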
## Bert Base Uncased Sparse 90 Unstructured Pruneofa

- Publisher: Intel
- License: Apache-2.0
- Category: Large Language Model
- Framework: Transformers
- Language: English
- Downloads: 178
- Likes: 0

A sparsely pre-trained BERT-Base model with 90% weight sparsity obtained through one-shot unstructured pruning, suitable for fine-tuning on various language tasks.
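A sketch of fine-tuning the sparse base checkpoint on a downstream classification task, assuming the Hub repo id `Intel/bert-base-uncased-sparse-90-unstructured-pruneofa` and using GLUE SST-2 as a stand-in for "various language tasks"; note that a plain Trainer run does not by itself preserve the zero weights during training.

```python
# Minimal sketch: fine-tune the sparse BERT-Base checkpoint on SST-2 with the
# Transformers Trainer. Assumptions: the repo id below matches the Hub
# listing, and SST-2 stands in for "various language tasks". Keeping the 90%
# sparsity through fine-tuning would require a sparsity-aware training loop.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "Intel/bert-base-uncased-sparse-90-unstructured-pruneofa"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

dataset = load_dataset("glue", "sst2")
encoded = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sst2-sparse-bert", num_train_epochs=1,
                           per_device_train_batch_size=32),
    train_dataset=encoded["train"].shuffle(seed=0).select(range(2000)),
    eval_dataset=encoded["validation"],
)
trainer.train()
print(trainer.evaluate())
```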