E1-Math-1.5B

Developed by Salesforce
E1-Math-1.5B is a language model fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B. It supports elastic reasoning, is trained with the GRPO method, and is suited to reasoning under token-budget constraints.
Downloads: 295
Release date: 5/7/2025

Model Overview

This model is trained with budget-constrained reasoning strategies to achieve elastic reasoning, using the GRPO method. As a result, it can reason adaptively when its thought process is interrupted mid-generation, and it generalizes to unseen budget constraints without additional training.
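The budget-split idea behind elastic reasoning can be sketched as follows. This is an illustrative toy, not the model's actual API: the function name, the token lists, and the `</think>` marker are all hypothetical stand-ins for how a total generation budget might be divided between a (possibly interrupted) thinking phase and a reserved solution phase.

```python
# Illustrative sketch of budget-constrained ("elastic") reasoning:
# the thinking phase is cut off once its token budget is exhausted,
# and the remaining budget is reserved for emitting the final solution.
# All names here are hypothetical; they are not part of the E1 release.

def elastic_generate(think_tokens, solve_tokens, think_budget, solve_budget):
    """Truncate the thinking trace at think_budget tokens, then append
    the solution trace truncated at solve_budget tokens."""
    thinking = think_tokens[:think_budget]
    solution = solve_tokens[:solve_budget]
    # The "</think>" marker simulates interrupting the thought process
    # and forcing the model to commit to an answer.
    return thinking + ["</think>"] + solution

out = elastic_generate(
    think_tokens=["step1", "step2", "step3", "step4"],
    solve_tokens=["answer", "is", "42"],
    think_budget=2,
    solve_budget=3,
)
print(out)  # ['step1', 'step2', '</think>', 'answer', 'is', '42']
```

The key design point this toy captures is that the solution budget is protected: no matter how long the model would have kept thinking, the interruption guarantees tokens remain for a final answer.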

Model Features

Elastic Reasoning
Supports reasoning under budget constraints, adapting to varying computational resource limits.
GRPO Method
Enables adaptive reasoning during interrupted thought processes, generalizing to unseen budget-constrained scenarios without additional training.
High Performance
Demonstrates high accuracy across multiple token lengths, particularly outperforming the base model at shorter token lengths.
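The GRPO method mentioned above centers on group-relative reward normalization: each sampled response is scored against the mean and standard deviation of its own sampling group rather than against a learned value function. A minimal sketch of that normalization step, assuming simple scalar rewards (the full method also involves a clipped policy-gradient objective, omitted here):

```python
# Minimal sketch of GRPO's group-relative advantage computation:
# each response's reward is normalized against the mean and standard
# deviation of the group of responses sampled for the same prompt.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Return standardized advantages for one group of sampled rewards."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    # eps guards against division by zero when all rewards are equal.
    return [(r - mu) / (sigma + eps) for r in rewards]

# Two correct (reward 1.0) and two incorrect (reward 0.0) samples:
adv = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
print(adv)  # positive advantages for reward 1.0, negative for 0.0
```

Because advantages are computed relative to the group, responses that beat their siblings are reinforced and the rest are suppressed, with no separate critic model needed.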

Model Capabilities

Mathematical Reasoning
Elastic Reasoning
Adaptive Reasoning

Use Cases

Academic Research
Mathematical Problem Solving
Used to solve complex mathematical problems, especially in resource-constrained environments.
Demonstrates high accuracy across multiple token lengths.
Education
Mathematics-Assisted Teaching
Helps students understand and solve mathematical problems, with elastic reasoning support for varying compute budgets.