
NeuralHermes 2.5 Mistral 7B

Developed by mlabonne
NeuralHermes is a large language model based on OpenHermes-2.5-Mistral-7B, further fine-tuned with Direct Preference Optimization (DPO). It performs strongly across multiple benchmarks.
Downloads: 215
Release date: 11/29/2023

Model Overview

This is a 7B-parameter large language model that uses the ChatML prompt template and focuses on text generation tasks. DPO fine-tuning improves on the original model's performance, making it stand out on the Open LLM Leaderboard.
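A minimal usage sketch with the Hugging Face transformers library, assuming the Hub id mlabonne/NeuralHermes-2.5-Mistral-7B and that the tokenizer ships a ChatML chat template; the system prompt and generation settings are illustrative, not prescribed by this card:

```python
# Minimal inference sketch; assumes the Hub id below and a ChatML chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/NeuralHermes-2.5-Mistral-7B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain DPO fine-tuning in one sentence."},
]

# apply_chat_template renders the messages with the model's chat template
# (ChatML: <|im_start|>role ... <|im_end|>) and returns the input ids.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```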

Model Features

DPO fine-tuning optimization
The base model was fine-tuned with Direct Preference Optimization (DPO), which noticeably improves its performance (a training sketch follows this list)
ChatML format support
Uses the ChatML prompt template, making it easy to integrate into chat applications
Leading results on multiple benchmarks
Performs strongly on the Open LLM Leaderboard, standing out among 7B-parameter models
Efficient training
The DPO fine-tuning run takes only about one hour on a single A100 GPU
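As a rough illustration of the DPO step, here is a minimal sketch using the TRL library. The tiny inline preference dataset, the Hub id for the base model, and all hyperparameters are assumptions for illustration, not the author's exact recipe; argument names follow recent TRL releases and may differ in older versions:

```python
# Minimal DPO fine-tuning sketch with TRL; dataset and hyperparameters are illustrative.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "teknium/OpenHermes-2.5-Mistral-7B"  # assumed Hub id of the base model named above
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# DPO trains on preference pairs: a prompt plus a preferred ("chosen")
# and a dispreferred ("rejected") completion.
train_dataset = Dataset.from_dict({
    "prompt": ["What is Direct Preference Optimization?"],
    "chosen": ["DPO fine-tunes a model directly on preference pairs, "
               "without training a separate reward model."],
    "rejected": ["I am not sure."],
})

config = DPOConfig(
    output_dir="neuralhermes-dpo",
    beta=0.1,                        # trade-off between preference fit and staying close to the reference model
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=5e-5,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = DPOTrainer(
    model=model,                     # the policy being fine-tuned; a frozen reference copy is created internally
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # tokenizer used to build DPO batches
)
trainer.train()
```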

Model Capabilities

Text generation
Chat dialogue (ChatML-formatted prompts; see the sketch after this list)
Q&A system
Instruction following
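For reference, a short sketch of the ChatML layout such chat prompts render to; the helper function and system prompt are illustrative, not part of the model card:

```python
# Illustrative helper showing the ChatML layout used by the model.
def to_chatml(system: str, user: str) -> str:
    """Render a single-turn conversation in ChatML, leaving the assistant
    turn open so the model completes it."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(to_chatml("You are a helpful assistant.", "Summarize DPO in one sentence."))
```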

Use Cases

Intelligent assistant
Chatbot: can serve as an intelligent chat assistant, providing natural and fluent conversations. Achieved 54.93% accuracy on the TruthfulQA benchmark.
Knowledge Q&A
Open-domain Q&A: answers knowledge-based questions across a wide range of domains. Achieved 63.32% accuracy on the MMLU benchmark.
Reasoning tasks
Logical reasoning: handles problems that require logical reasoning. Achieved 66.55% accuracy on the AI2 Reasoning Challenge.