
# Training-free pruning

Mistral 7B Instruct V0.2 Sparsity 20 V0.1
Apache-2.0
Mistral-7B-Instruct-v0.2 is an instruction-finetuned large language model that improves on Mistral-7B-Instruct-v0.1. This variant is pruned to 20% sparsity with the Wanda pruning method, a training-free technique, and maintains competitive performance without any retraining.
Large Language Model · Transformers
wang7776
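
Wanda scores each weight by its magnitude multiplied by the norm of the corresponding input activation, so a layer can be sparsified from a single pass over calibration data, with no gradient updates. The PyTorch sketch below illustrates that per-row scoring and masking rule under stated assumptions; the function name `wanda_prune_` and the random calibration batch are illustrative, not the model author's actual pruning code.

```python
import torch

def wanda_prune_(weight: torch.Tensor, activations: torch.Tensor, sparsity: float = 0.2) -> None:
    """Zero out the lowest-scoring weights in place, per output row.

    Score_ij = |W_ij| * ||X_:,j||_2 (Wanda-style scoring); no retraining is performed.
    weight:      (out_features, in_features) parameter of a linear layer
    activations: (num_tokens, in_features) calibration inputs to that layer
    """
    # Per-input-feature L2 norm of the calibration activations.
    feat_norm = activations.norm(p=2, dim=0)            # (in_features,)
    scores = weight.abs() * feat_norm.unsqueeze(0)      # (out_features, in_features)

    # Number of weights to prune in each output row (e.g. 20% of the columns).
    k = int(weight.shape[1] * sparsity)
    if k == 0:
        return

    # Indices of the k smallest scores per row, then a multiplicative mask.
    _, prune_idx = torch.topk(scores, k, dim=1, largest=False)
    mask = torch.ones_like(weight)
    mask.scatter_(1, prune_idx, 0.0)
    weight.mul_(mask)                                    # zero the pruned weights

# Illustrative usage on a single linear layer with random calibration data.
layer = torch.nn.Linear(1024, 1024, bias=False)
calib = torch.randn(256, 1024)
wanda_prune_(layer.weight.data, calib, sparsity=0.2)
```

In the actual method, the calibration activations come from real text fed through the model, and the rule is applied layer by layer; the random tensor above only stands in for that data.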