Blockchainlabs 7B Merged Test2 4 Prune Sft 4bit DPO Orca
Developed by alnrg2arg
A compact 7B-parameter LLM optimized for on-device use through merging, pruning, 4-bit quantization, and DPO training.
Downloads: 18
Released: 1/23/2024
Model Overview
This is a 7B-parameter language model based on the Mistral architecture. It was optimized through model merging, pruning to 50% sparsity, and DPO training to reduce its size while maintaining performance, making it suitable for deployment on resource-constrained devices.
Model Features
Device-side optimization
Significantly reduces model size through 50% sparsity pruning, suitable for deployment on resource-limited devices
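The card does not specify the pruning method, but unstructured magnitude pruning is a common way to reach a 50% sparsity target: keep the largest-magnitude half of the weights and zero the rest. A minimal, hypothetical sketch (`prune_to_sparsity` is an illustrative helper, not part of this model's published recipe):

```python
def prune_to_sparsity(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction `sparsity` of weights.

    Illustrative magnitude pruning; the model's actual pruning
    procedure is not documented here.
    """
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [w if abs(w) > threshold else 0.0 for w in weights]

pruned = prune_to_sparsity([0.9, -0.1, 0.4, -0.7, 0.05, 0.3], sparsity=0.5)
# Exactly half of the six weights are set to zero.
```

Zeroed weights need no storage or multiply-accumulate work on hardware and runtimes with sparse support, which is where the size and speed savings on constrained devices come from.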
DPO training
Trained using Direct Preference Optimization (DPO) method to improve output quality
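DPO trains directly on preference pairs: for each prompt it increases the policy's log-probability margin on the preferred response over the rejected one, relative to a frozen reference model. A minimal sketch of the per-pair loss (the log-probabilities here are assumed inputs; the actual training setup for this model is not published on the card):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * ((logp_c - ref_c) - (logp_l - ref_l))).

    Illustrative only; log-probs would come from the policy and a
    frozen reference model scoring the same responses.
    """
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When policy and reference agree (margin = 0), the loss is log 2;
# it falls below log 2 once the policy prefers the chosen response
# more strongly than the reference does.
```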
Efficient inference
Uses an 8-bit AdamW optimizer to reduce memory use during training
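8-bit optimizers save memory by storing optimizer state (e.g. AdamW's moment estimates) as int8 codes with a scale factor instead of 32-bit floats. A minimal sketch of per-tensor absmax quantization, the basic idea behind such schemes (illustrative helpers, not the exact algorithm used for this model):

```python
def quantize_8bit(values):
    """Map floats to int8 codes in [-127, 127] via per-tensor absmax scaling.

    Hypothetical sketch; real 8-bit optimizers use finer-grained
    (e.g. block-wise) scaling.
    """
    scale = max(abs(v) for v in values) / 127.0 or 1.0
    codes = [round(v / scale) for v in values]
    return codes, scale

def dequantize_8bit(codes, scale):
    """Recover approximate float values from int8 codes and the scale."""
    return [c * scale for c in codes]

codes, scale = quantize_8bit([0.5, -1.0, 0.25])
restored = dequantize_8bit(codes, scale)
# Each restored value differs from the original by at most `scale`,
# while the stored state shrinks from 32 bits to 8 bits per value.
```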
Model Capabilities
English text generation
Instruction following
Dialogue generation
Use Cases
Mobile applications
Device-side chat assistant
Deploy lightweight chatbots on mobile devices like smartphones
Reduces resource usage while maintaining response quality
Edge computing
Localized text processing
Perform text generation and processing on edge devices without cloud dependency
Enhances privacy protection and response speed