
Laser Dolphin Mixtral 2x7B DPO

Developed by macadeliccc
A medium-scale Mixture of Experts (MoE) model built from Dolphin-2.6-Mistral-7B-DPO-Laser, delivering an average improvement of approximately 1 point across evaluations
Downloads: 133
Release Date: 1/8/2024

Model Overview

This is a text generation model built on a Mixture of Experts architecture and refined with the LASER (Layer-Selective Rank Reduction) technique, suitable for a wide range of natural language processing tasks.

Model Features

Mixture of Experts Architecture
Utilizes a 2x7B parameter Mixture of Experts architecture to balance performance and efficiency
LASER Optimization
Refined with the LASER (Layer-Selective Rank Reduction) technique to improve model quality
Quantization Support
Available in multiple quantized versions, including ExLlamav2, GGUF, and AWQ formats (see the GGUF loading sketch after this list)
Performance Improvement
Achieves an average performance improvement of approximately 1 point compared to the previous version
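
Below is a minimal sketch of running one of the GGUF quantizations with llama-cpp-python. The model file name, context size, and prompt are assumptions rather than part of the official release; substitute the path of whichever GGUF quant you actually download.

```python
# Minimal llama-cpp-python sketch for a GGUF quantization of this model.
# The file name below is hypothetical; point model_path at your downloaded quant.
from llama_cpp import Llama

llm = Llama(
    model_path="laser-dolphin-mixtral-2x7b-dpo.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

# ChatML-style prompt, assumed from the Dolphin model family
prompt = (
    "<|im_start|>user\nExplain mixture-of-experts routing in two sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```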

Model Capabilities

Text Generation
Code Generation
Question Answering
Reasoning Tasks

Use Cases

Programming Assistance
Code Generation
Generates Python code based on natural language descriptions
Can produce standard routines such as a quicksort implementation
Education
Problem-solving Assistance
Solves mathematical and logical problems
Achieves an accuracy of 48.29% on the GSM8k math test set
General Question Answering
Fact-based Question Answering
Answers fact-based questions
Scores 60.76 on the TruthfulQA test set (mc2 score)
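
For question-answering use, the prompt can also be built from the tokenizer's chat template rather than hand-written ChatML. This assumes the uploaded tokenizer defines a chat template, which should be confirmed on the model page.

```python
# Build a Q&A prompt via the tokenizer's chat template (if one is defined).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/laser-dolphin-mixtral-2x7b-dpo")

messages = [
    {"role": "system", "content": "You are a concise, factual assistant."},
    {"role": "user", "content": "Which planet in the solar system has the most known moons?"},
]

# Returns a single prompt string ready to pass to model.generate or llama.cpp
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```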