NeuralBeagle14-7B 8.0bpw H8 EXL2
NeuralBeagle14-7B is a 7B-parameter large language model, fine-tuned from Beagle14-7B with the DPO method, that ranks among the strongest models in the 7B class.
Release date: 1/17/2024
Model Overview
This model was created by merging fblgit/UNA-TheBeagle-7b-v1 and argilla/distilabeled-Marcoro14-7B-slerp and then fine-tuning the result with the DPO method; it posts strong results on the Open LLM Leaderboard and the Nous benchmark suite.
Model Features
DPO Fine-Tuning
Fine-tuned with Direct Preference Optimization (DPO) on the argilla/distilabel-intel-orca-dpo-pairs preference dataset
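Conceptually, DPO trains the policy directly on preference pairs instead of fitting a separate reward model. The sketch below shows the per-pair DPO loss in plain Python; it is illustrative only (real fine-tuning, e.g. with TRL, computes this over batches of token-level log-probabilities), and the argument names are placeholders, not an API from this model's training code.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * (chosen margin - rejected margin)).

    Each argument is the summed log-probability of a full response under the
    policy or the frozen reference model; beta controls how far the policy
    may drift from the reference.
    """
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log(sigmoid(logits))

# With no margin the loss sits at log(2); when the policy favors the chosen
# response more than the reference does, the loss drops below that.
loss_neutral = dpo_loss(-10.0, -10.0, -10.0, -10.0)
loss_better = dpo_loss(-8.0, -12.0, -10.0, -10.0)
```

Minimizing this loss pushes the policy to assign relatively more probability to the preferred response than the reference model does, without ever sampling from a reward model.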
Model Merging
Combined two high-performance 7B models via LazyMergekit
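One of the merged parents was itself produced with a SLERP merge, which interpolates corresponding weight tensors along the hypersphere rather than linearly. A minimal sketch of SLERP on flattened weight vectors, for intuition only (mergekit/LazyMergekit operates per-tensor with per-layer interpolation factors, which this does not reproduce):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    Moves along the arc between a and b at fraction t; falls back to
    plain linear interpolation when the vectors are nearly parallel,
    where the spherical formula would divide by ~0.
    """
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    theta = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if theta < eps:  # nearly parallel vectors
        return (1 - t) * a + t * b
    sin_theta = np.sin(theta)
    return (np.sin((1 - t) * theta) / sin_theta) * a \
         + (np.sin(t * theta) / sin_theta) * b
```

Compared with straight averaging, SLERP preserves the geometric relationship between the two parents' weights, which is one reason it is a popular merge method for same-architecture models.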
High Performance
Ranked first among 7B-parameter models on the Open LLM Leaderboard and the Nous benchmark at the time of release
Model Capabilities
Text Generation
Dialogue Systems
Instruction Following
Use Cases
Dialogue Systems
Intelligent Assistant
Can be used to build high-performance conversational AI assistants, with strong results on dialogue tasks
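For assistant-style use, the model is typically prompted with a chat template. The helper below builds a ChatML-style prompt string as a sketch; it assumes ChatML special tokens (`<|im_start|>`, `<|im_end|>`) are appropriate for this model, which you should confirm against the base model's tokenizer config before relying on it.

```python
def build_chatml_prompt(messages):
    """Format a list of {"role": ..., "content": ...} dicts as a
    ChatML-style prompt, leaving the final assistant turn open for
    the model to complete."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize DPO in one sentence."},
])
```

In practice, prefer the tokenizer's built-in `apply_chat_template` when loading the model with Transformers, so the template always matches the one the model was trained with.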
Text Generation
Content Creation
Can generate high-quality articles, stories, and other content