
ReluLLaMA 7B

Developed by SparseLLM
A ReLU-activated sparse large language model fine-tuned from Llama 2 7B that improves computational efficiency through dynamic parameter selection
Downloads: 5,323
Release Date: 11/28/2023

Model Overview

A sparse large language model built on the ReLU activation function, achieving efficient inference through knowledge distillation and joint optimization; suitable for English text-processing tasks

Model Features

Sparse Computation Optimization
Uses the ReLU activation function to achieve MoE-like selective parameter activation, improving computational efficiency (see the sparsity sketch after this list)
Joint Optimization Training
Trains on language modeling and knowledge distillation simultaneously to prevent overfitting and improve generalization (a loss sketch also follows below)
Efficient Inference Support
Compatible with the PowerInfer inference framework, supporting sparse inference acceleration on CPUs
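The sparsity feature can be illustrated with a minimal PyTorch sketch (not the model's actual implementation): in a Llama-style gated MLP, replacing SiLU with ReLU makes every neuron whose pre-activation is negative output exactly zero, so its rows in the down projection can be skipped at inference time. The layer sizes below follow Llama 2 7B; the module and variable names are illustrative.

```python
import torch
import torch.nn as nn

class ReluMLP(nn.Module):
    """Llama-style gated MLP with ReLU in place of SiLU (illustrative sketch)."""
    def __init__(self, hidden_size=4096, intermediate_size=11008):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x):
        # ReLU zeroes out negative gate values, producing exact activation sparsity.
        gate = torch.relu(self.gate_proj(x))
        return self.down_proj(gate * self.up_proj(x))

mlp = ReluMLP()
x = torch.randn(1, 16, 4096)  # (batch, seq_len, hidden)
gate = torch.relu(mlp.gate_proj(x))
sparsity = (gate == 0).float().mean().item()
print(f"fraction of inactive neurons: {sparsity:.2%}")
```

With random weights roughly half the neurons are inactive; a trained ReLU-activated model typically exhibits far higher sparsity, which is what a sparsity-aware runtime exploits.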
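The joint-optimization objective can be sketched as a weighted sum of a next-token cross-entropy loss and a distillation loss against the dense teacher (Llama 2 7B). This is a hedged illustration: `kd_weight`, the temperature, and the KL formulation are assumptions, not the authors' published recipe.

```python
import torch
import torch.nn.functional as F

def joint_loss(student_logits, teacher_logits, labels, kd_weight=1.0, temperature=1.0):
    vocab = student_logits.size(-1)
    # Language-modeling term: cross-entropy against the ground-truth next tokens.
    lm = F.cross_entropy(student_logits.view(-1, vocab), labels.view(-1))
    # Distillation term: KL divergence from the dense teacher's softened distribution.
    kd = F.kl_div(
        F.log_softmax(student_logits.view(-1, vocab) / temperature, dim=-1),
        F.softmax(teacher_logits.view(-1, vocab) / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return lm + kd_weight * kd

# Toy usage with random tensors (batch=2, seq=8, vocab=32000 as in Llama 2).
logits_s = torch.randn(2, 8, 32000)
logits_t = torch.randn(2, 8, 32000)
labels = torch.randint(0, 32000, (2, 8))
print(joint_loss(logits_s, logits_t, labels).item())
```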

Model Capabilities

English Text Generation
Language Understanding
Question Answering
Knowledge Reasoning

Use Cases

Efficiency Optimization
CPU Environment Inference Acceleration
Achieves efficient inference in resource-constrained environments (see the loading and timing sketch after this list)
Sparse inference reaches 8.21 tokens/sec on an Intel i9-13900K
Academic Research
Sparse Computation Research
Provides a foundational model for research on sparse large language model algorithms
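Below is a minimal sketch of loading the checkpoint with Hugging Face transformers and roughly measuring CPU generation throughput, assuming the published repo id `SparseLLM/ReluLLaMA-7B`. The prompt and generation settings are illustrative, and plain transformers inference will not match PowerInfer's sparsity-aware speed.

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SparseLLM/ReluLLaMA-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
start = time.perf_counter()
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
elapsed = time.perf_counter() - start

new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
print(tokenizer.decode(out[0], skip_special_tokens=True))
print(f"{new_tokens / elapsed:.2f} tokens/sec")
```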