Mistral 7B Instruct V0.2 Sparsity 30 V0.1
Apache-2.0
This model is a pruned variant of Mistral-7B-Instruct-v0.2, which is itself an improved instruction fine-tuned version of Mistral-7B-Instruct-v0.1. It reaches 30% weight sparsity via the Wanda pruning method, which requires no retraining, while maintaining competitive performance. A usage sketch and an outline of the Wanda scoring rule follow below.
Large Language Model
Transformers
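
A minimal loading and generation sketch with the Transformers library, assuming the checkpoint is published as a standard causal-LM repository; the repository id below is a placeholder, not a confirmed hub path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id: substitute the actual hub path of this checkpoint.
model_id = "path/to/mistral-7b-instruct-v0.2-sparsity-30"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the `accelerate` package to be installed.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Explain weight sparsity in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```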
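
For context, a rough PyTorch sketch of the Wanda scoring rule (weight magnitude times per-input-channel activation norm, pruned within each output row). The function name, tensor shapes, and calibration handling are illustrative assumptions and do not reproduce the exact pipeline used to produce this checkpoint.

```python
import torch

def wanda_prune_layer(weight: torch.Tensor,
                      calib_inputs: torch.Tensor,
                      sparsity: float = 0.3) -> torch.Tensor:
    """Zero out the lowest-scoring weights of one linear layer.

    weight:        (out_features, in_features) weight matrix
    calib_inputs:  (num_tokens, in_features) calibration activations
    sparsity:      fraction of weights removed per output row
    """
    # Wanda score: |W_ij| * ||X_j||_2, using the L2 norm of each input channel
    act_norm = calib_inputs.norm(p=2, dim=0)           # (in_features,)
    scores = weight.abs() * act_norm.unsqueeze(0)      # (out_features, in_features)

    # Remove the k lowest-scoring weights within each output row
    k = int(weight.shape[1] * sparsity)
    if k > 0:
        _, prune_idx = torch.topk(scores, k, dim=1, largest=False)
        mask = torch.ones_like(weight, dtype=torch.bool)
        mask.scatter_(1, prune_idx, False)
        weight = weight * mask
    return weight

# Example: prune a random layer to ~30% sparsity with 128 calibration tokens.
w = torch.randn(4096, 4096)
x = torch.randn(128, 4096)
pruned = wanda_prune_layer(w, x, sparsity=0.3)
print((pruned == 0).float().mean())  # ~0.30
```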