ALP DeepScaleR 1.5B C16K
Developed by SynthLabsAI
ALP_DeepScaleR_1.5B_C16K is trained with the Adaptive Length Penalty (ALP) method on top of the DeepScaleR-1.5B base model, and significantly reduces token usage while maintaining performance.
Downloads 333
Release Date: 5/27/2025
Model Overview
This model optimizes token usage efficiency through the adaptive length penalty technique, is suited to tasks such as mathematical reasoning and competition problem solving, and supports a 16K context window.
Model Features
Adaptive Length Penalty (ALP)
Reduces token usage by approximately 50% through ALP, significantly improving inference efficiency
Long context support
Supports a 16K-token context window, suitable for handling complex problems
Mathematical reasoning optimization
Performs strongly on mathematical datasets such as MATH and AIME
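The card does not spell out how the adaptive length penalty is computed, but the core idea is a per-prompt penalty on response length folded into the training reward. A minimal sketch under that assumption (the function name, `alpha`, and the use of a per-prompt solve rate as the adaptivity signal are illustrative, not the exact ALP formulation):

```python
def alp_reward(correct: bool, n_tokens: int, solve_rate: float,
               alpha: float = 1e-4) -> float:
    """Hedged sketch of an adaptive length penalty reward.

    solve_rate in [0, 1] is assumed to be estimated from sampled
    rollouts per prompt: easy prompts (high solve rate) receive a
    stronger per-token penalty, so the model learns to answer easy
    questions briefly and spend tokens only where they are needed.
    """
    penalty = alpha * solve_rate * n_tokens  # penalty scales with the difficulty signal
    return (1.0 if correct else 0.0) - penalty
```

Under this sketch a correct 1000-token answer to a fully solved prompt scores lower than the same answer to a hard prompt, which is the pressure that drives the ~50% token reduction the card reports.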
Model Capabilities
Mathematical problem solving
Competition problem solving
Step-by-step reasoning
Long text processing
Use Cases
Education
Mathematics competition tutoring
Solve mathematical competition problems such as AMC/AIME
Achieves an accuracy of 0.80 on the MATH-500 dataset
Mathematics learning assistant
Solves complex mathematical problems step by step
Supports outputting the final answer in the \boxed{} format
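Since the model emits its final answer in \boxed{} format, downstream code typically needs to extract that answer from the generated text. A minimal helper might look like this (the function name and regex are my own, not part of the model card; it assumes no nested braces inside the box):

```python
import re

def extract_boxed(text: str):
    """Return the contents of the last \\boxed{...} in model output,
    or None if no boxed answer is present."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None
```

Taking the last match is a common convention, since step-by-step solutions may mention intermediate boxed expressions before the final answer.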
Research
Mathematical reasoning research
Used for benchmarking mathematical reasoning models
Achieves an accuracy of 0.51 on OlympiadBench