
Suzume Llama 3 8B Multilingual Orpo Borda Half

Developed by lightblue
A multilingual large model fine-tuned via the ORPO method based on Llama-3-8B, trained with 50% of the most consistent ranking data, demonstrating excellent performance in various language tasks.
Downloads: 4,625
Release Date: 4/25/2024

Model Overview

This is a multilingual large language model based on the Llama-3-8B architecture and fine-tuned with ORPO (Odds Ratio Preference Optimization), optimized specifically for multilingual comprehension and generation.
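For reference, below is a minimal usage sketch with the Hugging Face transformers library. The repo id lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half and the loading options are assumptions inferred from the model name, not stated on this page; device_map="auto" additionally requires the accelerate package.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; check the lightblue organization on Hugging Face.
model_id = "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama-3 instruct-style models ship a chat template; build the prompt with it.
messages = [{"role": "user", "content": "Bonjour, pouvez-vous m'aider ?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))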

Model Features

ORPO Optimized Training
Fine-tuned using the Odds Ratio Preference Optimization method, significantly improving the model's performance on multilingual tasks; a conceptual sketch of the objective follows this feature list.
Multilingual Capabilities
Strong performance across six major languages (Chinese, English, French, German, Japanese, Russian), surpassing GPT-3.5 in some of them.
Data Selection
Trained on the 50% of preference data with the most consistent rankings to ensure training quality, hence "borda-half" in the model name.
Long Context Support
Supports contexts of up to 8,192 tokens.
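As a rough illustration of the ORPO training point above, the sketch below restates the odds-ratio objective from the ORPO paper (Hong et al., 2024) in PyTorch. This is a conceptual sketch, not lightblue's training code; chosen_logps, rejected_logps, and chosen_nll are assumed inputs (average per-token log-probabilities of the preferred and rejected responses, and the supervised loss on the preferred response).

import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps, rejected_logps, chosen_nll, lam=0.1):
    # odds(y|x) = p / (1 - p); in log space: log p - log(1 - p).
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))
    # Odds-ratio term: push the chosen response's odds above the rejected one's.
    ratio = F.logsigmoid(log_odds_chosen - log_odds_rejected)
    # Total loss = supervised NLL on the chosen response + weighted OR penalty.
    return chosen_nll - lam * ratio.mean()

The single lam (lambda) weight is what lets ORPO fold preference alignment into ordinary supervised fine-tuning without a separate reference model, which is what distinguishes it from DPO-style methods.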

Model Capabilities

Multilingual Text Generation
Multilingual Q&A
Multilingual Dialogue Systems
Multilingual Text Understanding
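A follow-on sketch of the dialogue capability, reusing the model and tokenizer loaded in the earlier sketch; the multi-turn, mixed-language conversation is illustrative only.

# Multi-turn history in the chat-template format; languages are arbitrary.
history = [
    {"role": "user", "content": "日本の首都はどこですか？"},
    {"role": "assistant", "content": "日本の首都は東京です。"},
    {"role": "user", "content": "And what is the capital of Germany?"},
]
prompt = tokenizer.apply_chat_template(
    history, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
reply = model.generate(prompt, max_new_tokens=128)
print(tokenizer.decode(reply[0][prompt.shape[-1]:], skip_special_tokens=True))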

Use Cases

Multilingual Applications
Multilingual Customer Service Chatbot
Build intelligent customer service systems supporting multiple languages.
Achieved its best MT-Bench score in Russian (8.94) and scored 7.74 in Chinese.
Multilingual Content Creation
Assist in generating marketing copy, articles, and other content in multiple languages.
Outperformed the base model in French and German tests.
Research Applications
ORPO Method Research
Study the impact of different proportions of training data on model performance.
This 50%-data version performed strongly across multiple tests; a hypothetical sketch of this kind of ranking-consistency selection follows.
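To make the "borda-half" idea concrete, here is a hypothetical sketch of ranking-consistency selection: aggregate judge rankings with Borda counts, then keep the half of the examples where judges agree most. The field names and the spread-based consistency measure are assumptions, not lightblue's published pipeline.

from statistics import pstdev

def borda_scores(rankings, n_candidates):
    """Aggregate judge rankings into Borda scores (higher = more preferred)."""
    scores = [0] * n_candidates
    for ranking in rankings:  # one ranking (list of candidate indices) per judge
        for place, candidate in enumerate(ranking):
            scores[candidate] += n_candidates - 1 - place
    return scores

def keep_most_consistent(examples, fraction=0.5):
    """Keep the `fraction` of examples whose judges agree most.

    Consistency is approximated here as the spread of Borda scores: when
    judges agree, scores polarize (large spread); when they disagree, the
    scores flatten out (small spread).
    """
    def spread(ex):
        n = len(ex["responses"])
        return pstdev(borda_scores(ex["rankings"], n))
    ranked = sorted(examples, key=spread, reverse=True)
    return ranked[: int(len(ranked) * fraction)]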