DeepHermes 3 Mistral 24B Preview

A preview release of DeepHermes 3 from Nous Research, built on the Mistral architecture and tuned for reasoning and complex tasks. Its 24B parameters are specifically optimized for enhanced reasoning, inheriting the DeepHermes series' strengths in logical inference. As a preview, it showcases the open-source community's latest advances in high-quality reasoning models and is suited to research and development that requires strong reasoning.
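For orientation, below is a minimal sketch of running the model locally with Hugging Face transformers. The repository id and the deep-thinking system prompt are assumptions based on DeepHermes series conventions, not details confirmed by this page.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "NousResearch/DeepHermes-3-Mistral-24B-Preview"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    # DeepHermes models reportedly toggle long-form reasoning via the system
    # prompt; this wording is illustrative, not the documented prompt.
    {"role": "system", "content": "You are a deep-thinking AI. Reason step by step "
                                  "inside <think></think> tags before answering."},
    {"role": "user", "content": "A train leaves at 3 pm traveling 60 mph. "
                                "When has it covered 150 miles?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

When the deep-thinking system prompt is omitted, the same weights are said to answer directly without the long reasoning trace.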
Intelligence: Relatively Weak
Speed: Slow
Input Modalities: Text
Reasoning Model: Yes
Context Window: 32,000 tokens
Maximum Output Tokens: -
Knowledge Cutoff: -
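Because the 32,000-token context window bounds prompt plus completion, a caller typically reserves an output budget and trims the prompt to fit. A small sketch, assuming the repo id above (an assumption) and an illustrative 2,000-token output budget:

```python
from transformers import AutoTokenizer

MODEL_ID = "NousResearch/DeepHermes-3-Mistral-24B-Preview"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

CONTEXT_WINDOW = 32_000  # context length listed on this page
RESERVED_OUTPUT = 2_000  # illustrative budget for the model's reply

def fit_to_context(text: str) -> str:
    """Drop the oldest tokens so prompt plus reply stay within the window."""
    ids = tokenizer.encode(text)
    budget = CONTEXT_WINDOW - RESERVED_OUTPUT
    return tokenizer.decode(ids[-budget:]) if len(ids) > budget else text
```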

Pricing

Input: - /M tokens
Output: - /M tokens
Blended Price: - /M tokens
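No rates are published here, but a blended $/M-token figure is conventionally a token-weighted average of the input and output rates, often assuming roughly a 3:1 input-to-output token ratio. A sketch with placeholder numbers (both the ratio and the dollar figures are assumptions, since this page lists no prices):

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_ratio: float = 3.0, output_ratio: float = 1.0) -> float:
    """Token-weighted average price per million tokens."""
    total = input_ratio + output_ratio
    return (input_per_m * input_ratio + output_per_m * output_ratio) / total

# Placeholder rates of $0.20/M input and $0.80/M output (not this model's prices):
print(f"${blended_price(0.20, 0.80):.3f}/M tokens")  # -> $0.350/M tokens
```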

Quick Simple Comparison

Model: Input / Output
Hermes 3 - Llama-3.1 70B: - / -
DeepHermes 3 - Llama-3.1 8B Preview: - / -
DeepHermes 3 - Mistral 24B Preview: - / -

Basic Parameters

DeepHermes 3 - Mistral 24B Preview Technical Parameters
Parameter Count: Not Announced (24B per the model name)
Context Length: 32.00k tokens
Training Data Cutoff: -
Open Source Category: Open Weights (Permissive License)
Multimodal Support: Text Only
Throughput: -
Release Date: 2025-03-13
Response Speed: 0 tokens/s

Benchmark Scores

Below are the scores of DeepHermes 3 - Mistral 24B Preview on standard benchmarks that evaluate its capabilities across different tasks and domains.
Intelligence Index: 29.99
Composite index of overall large language model intelligence.

Coding Index: 21.14
Indicator of model performance on coding tasks.

Math Index: 32.07
Indicator of capability in mathematical reasoning and math-related tasks.

MMLU Pro: 58
Massive Multitask Language Understanding - Professional: a harder, reasoning-focused variant of MMLU covering knowledge across many subjects (text only).

GPQA: 38.2
Graduate-Level Google-Proof Q&A: difficult graduate-level science questions (biology, physics, chemistry), commonly reported on the Diamond subset.

HLE: 3.9
Humanity's Last Exam: a very difficult benchmark of expert-level questions spanning many disciplines.

LiveCodeBench: 19.5
Evaluates real-world code writing and the ability to solve programming-competition problems.

SciCode: 22.8
Measures code generation for scientific computing and domain-specific scientific problems.

HumanEval: 74.6
Classic code-generation benchmark measuring the functional correctness of generated Python solutions.

Math 500 Score: 59.5
MATH-500: a 500-problem subset of the MATH benchmark of competition-style mathematics problems.

AIME Score: 4.7
Measures ability to solve high-difficulty mathematics competition problems at the AIME level.