
Smaug 72B V0.1

Developed by abacusai
The first open-source large language model to achieve an average score exceeding 80%, fine-tuned from MoMo-72B-lora-1.8.7-DPO using innovative DPO-Positive technology for preference learning optimization
Downloads: 119
Release Time: 2/2/2024

Model Overview

An open-source large language model with 72B parameters that excels on the HuggingFace Open LLM Leaderboard and is suited to complex reasoning, mathematical calculation, and text generation tasks
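
For orientation, here is a minimal usage sketch with the Hugging Face transformers library. The repository id abacusai/Smaug-72B-v0.1 and the loading settings are assumptions based on common Hub conventions rather than details from this page, and a 72B model requires multiple high-memory GPUs.

# Minimal sketch: loading and prompting the model with transformers.
# The repo id and settings below are assumptions, not confirmed by this page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/Smaug-72B-v0.1"  # assumed Hub repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",           # shard layers across available GPUs
)

prompt = "Explain the difference between DPO and DPO-Positive in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))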

Model Features

DPO-Positive Optimization
Uses the novel DPOP loss function to counteract the decline in the probability of preferred completions that standard DPO exhibits on mathematical datasets (see the sketch after this feature list)
High Performance
The first open-source model to achieve an average score exceeding 80% on the HuggingFace Open LLM Leaderboard
Multi-task Optimization
Delivers outstanding performance on diverse datasets including ARC, HellaSwag, and MetaMath
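
To make the DPOP idea concrete, here is a minimal sketch in PyTorch, assuming summed per-sequence log-probabilities from the policy and a frozen reference model. The function name and the beta and lambda defaults are illustrative assumptions based on the published DPOP description, not abacusai's actual training code.

# Sketch of a DPO-Positive (DPOP) loss over per-sequence log-probs.
import torch
import torch.nn.functional as F

def dpop_loss(policy_chosen_logps, policy_rejected_logps,
              ref_chosen_logps, ref_rejected_logps,
              beta=0.1, lam=50.0):
    # Standard DPO margin: implicit reward of chosen minus rejected,
    # each measured relative to the reference model.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    margin = chosen_rewards - rejected_rewards
    # DPOP penalty: fires only when the policy assigns *lower* probability
    # to the chosen completion than the reference model does, which is the
    # probability-decline failure mode DPOP is designed to prevent.
    penalty = torch.clamp(ref_chosen_logps - policy_chosen_logps, min=0.0)
    # The penalty is subtracted inside the sigmoid, scaled by beta.
    return -F.logsigmoid(beta * (margin - lam * penalty)).mean()

The hinge term is zero whenever the policy already matches or exceeds the reference model's likelihood of the chosen completion, so in that regime the loss reduces to ordinary DPO.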

Model Capabilities

Complex text generation
Mathematical problem solving
Programming code generation
Multi-turn dialogue
Knowledge Q&A
Logical reasoning

Use Cases

Content creation
Technical article writing: generates technical comparison articles with structured outlines, as demonstrated in the smartphone feature comparison example
Educational assistance
Mathematical problem solving: provides step-by-step solutions to problems such as probability calculations, as shown in the probability calculation example (an illustrative stand-in follows)
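
The probability calculation example itself is not reproduced on this page; as a stand-in, here is a short worked problem of the kind meant, with the specific question being an assumption:

# Illustrative only: an assumed problem standing in for the page's example.
# Problem (assumed): probability of rolling at least one six in two fair dice rolls.
# Step 1: P(no six in one roll) = 5/6.
# Step 2: P(no six in two rolls) = (5/6)**2 = 25/36.
# Step 3: P(at least one six) = 1 - 25/36 = 11/36.
from fractions import Fraction

p_no_six = Fraction(5, 6) ** 2
print(1 - p_no_six)  # 11/36, approximately 0.3056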
Software development
Algorithm implementation: writes basic algorithm code, as demonstrated in the array intersection search example (a sketch follows below)
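
The array intersection search example is likewise not shown on this page; a minimal sketch of such code, with a hypothetical function name, might look like this:

def intersect(a, b):
    """Return the distinct elements common to both input lists."""
    seen = set(a)  # O(len(a)) membership set for fast lookups
    # Deduplicate b before checking membership; output order is arbitrary.
    return [x for x in set(b) if x in seen]

print(intersect([1, 2, 2, 3], [2, 3, 4]))  # e.g. [2, 3]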