
SQFT Phi-3-mini-4k 60% Base

Developed by IntelLabs
A sparsified model based on microsoft/Phi-3-mini-4k-instruct, achieving 60% sparsity with the Wanda method and no quantization.
Downloads: 110
Release Time: 4/28/2024

Model Overview

This model is obtained by applying 60% sparsity to Phi-3-mini-4k-instruct with the Wanda sparsification method, making it suitable for scenarios that require efficient inference.
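As a quick illustration of what "60% sparsity" means in practice, the sketch below (pure NumPy, with toy tensors standing in for real model weights; the layer names are hypothetical, not from the actual checkpoint) measures the fraction of exactly-zero entries across a set of weight matrices:

```python
import numpy as np

def overall_sparsity(state_dict):
    """Fraction of zero entries across all tensors in `state_dict`
    (a name -> numpy array mapping, standing in for model weights)."""
    zeros = sum(int(np.sum(w == 0)) for w in state_dict.values())
    total = sum(w.size for w in state_dict.values())
    return zeros / total

# Toy stand-in for two sparsified layers, each zeroed to 60%.
rng = np.random.default_rng(0)
layers = {}
for name in ("layer0.weight", "layer1.weight"):
    w = rng.normal(size=(20, 10))
    # zero out exactly 60% of the entries, chosen at random
    flat_idx = rng.choice(w.size, size=int(0.6 * w.size), replace=False)
    w.flat[flat_idx] = 0.0
    layers[name] = w

print(overall_sparsity(layers))  # → 0.6
```

Running the same kind of check over a real checkpoint's weight tensors is a simple way to verify a claimed sparsity level.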

Model Features

Efficient Sparsification
Achieves 60% sparsity with the Wanda method, zeroing out a large fraction of the weights and thereby reducing the model's effective compute and storage cost.
Preserved Original Performance
Maintains the performance of the original Phi-3-mini model as much as possible during sparsification.
Hardware-Friendly
The sparsity pattern makes the model better suited to hardware and kernels that can exploit zero weights for faster, lighter inference.
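A minimal sketch of Wanda-style pruning, assuming the method's standard formulation (importance score = |weight| × L2 norm of the corresponding input activation, with the lowest-scoring weights removed per output row); the shapes and calibration data here are toy stand-ins, not the actual SQFT pipeline:

```python
import numpy as np

def wanda_prune(w, x, sparsity=0.6):
    """Prune `sparsity` fraction of each row of weight matrix `w`
    (shape [out, in]) using Wanda scores |W| * ||X||_2, where `x` is a
    batch of calibration activations with shape [n_samples, in]."""
    act_norm = np.linalg.norm(x, axis=0)        # per-input-feature L2 norm
    scores = np.abs(w) * act_norm               # Wanda importance score
    k = int(sparsity * w.shape[1])              # weights to drop per row
    pruned = w.copy()
    idx = np.argsort(scores, axis=1)[:, :k]     # lowest-score columns per row
    np.put_along_axis(pruned, idx, 0.0, axis=1)
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 10))      # toy weight matrix
x = rng.normal(size=(32, 10))     # toy calibration batch
pruned = wanda_prune(w, x)
print(np.mean(pruned == 0))  # → 0.6
```

The key point is that Wanda needs only a small calibration batch and no retraining: pruning decisions are made from a single forward statistic (the activation norms) combined with weight magnitudes.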

Model Capabilities

Text Generation
Language Understanding
Instruction Following

Use Cases

Efficient Inference
Edge Device Deployment
Deploy the sparsified model on resource-constrained edge devices for text processing.
Reduces memory usage and computational overhead
Real-time Applications
Real-time text generation applications requiring fast response.
Improves inference speed
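To give a rough sense of the memory claim above, here is a simplified, hypothetical comparison of dense fp32 storage versus a CSR-style sparse layout for a 60%-sparse matrix (byte counts are idealized; real savings depend on the storage format and on hardware/kernel support for sparsity):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(1024, 1024)).astype(np.float32)
mask = rng.random(w.shape) < 0.6          # zero out ~60% of the weights
w[mask] = 0.0

nnz = int(np.count_nonzero(w))
dense_bytes = w.size * 4                  # fp32 dense storage
# CSR-style layout: one fp32 value + one int32 column index per nonzero,
# plus one int32 row pointer per row (+1)
sparse_bytes = nnz * (4 + 4) + (w.shape[0] + 1) * 4
print(dense_bytes, sparse_bytes, sparse_bytes / dense_bytes)
```

Note that with per-nonzero indexing overhead, unstructured 60% sparsity yields only a modest storage saving; larger speed and memory gains typically require kernels or hardware designed to exploit the sparsity pattern directly.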