
ALMA-13B-R

Developed by haoranxu
ALMA-13B-R is a large language model fine-tuned from ALMA-13B-LoRA with Contrastive Preference Optimization (CPO). It is designed specifically for machine translation and matches or surpasses GPT-4 and WMT competition winners on translation quality.
Downloads: 281
Release Time: 1/17/2024

Model Overview

ALMA-13B-R is a large language model focused on machine translation. It is fine-tuned with Contrastive Preference Optimization to support high-quality multilingual translation.

Model Features

Contrastive Preference Optimization (CPO)
Fine-tuned with Contrastive Preference Optimization rather than conventional supervised fine-tuning, training the model to prefer stronger translations over weaker ones.
High-performance Translation
Matches or surpasses the translation quality of GPT-4 and WMT competition winners.
LoRA Fine-tuning
Employs LoRA (Low-Rank Adaptation) technology for efficient fine-tuning
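The CPO objective named above can be sketched as a reference-free preference term (a DPO-style sigmoid contrast without a reference model) plus a likelihood term on the preferred translation. The function below is a minimal illustration using scalar sequence log-probabilities; the `beta` value and the equal weighting of the two terms are assumptions, not the paper's exact per-token formulation.

```python
import math

def cpo_loss(logp_preferred: float, logp_rejected: float,
             beta: float = 0.1, nll_weight: float = 1.0) -> float:
    """Sketch of a CPO-style objective on one preference pair.

    logp_preferred / logp_rejected are the model's (sequence-level)
    log-probabilities of the preferred and rejected translations.
    """
    # Preference term: -log sigmoid(beta * (log p_w - log p_l)),
    # pushing the model to rank the preferred translation higher.
    margin = beta * (logp_preferred - logp_rejected)
    pref_term = -math.log(1.0 / (1.0 + math.exp(-margin)))
    # Likelihood term: negative log-likelihood of the preferred
    # translation, keeping the model close to good outputs.
    nll_term = -logp_preferred * nll_weight
    return pref_term + nll_term
```

As a sanity check, widening the margin between preferred and rejected translations lowers the loss, as does raising the likelihood of the preferred one.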

Model Capabilities

High-quality machine translation
Multilingual translation
Text generation

Use Cases

Machine Translation
Chinese-English Translation
Translate Chinese text into English
Achieves translation quality surpassing GPT-4
Multilingual Translation
Supports translation tasks between multiple languages
Performs excellently on WMT competition datasets
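ALMA-family checkpoints are typically prompted with a fixed translation template. The helper below sketches that format for the Chinese-English use case above; the exact wording follows the examples in the ALMA repository and should be verified against this checkpoint's model card. The returned string would then be fed to the model, e.g. via Hugging Face's `generate`.

```python
def alma_prompt(source_text: str,
                src_lang: str = "Chinese",
                tgt_lang: str = "English") -> str:
    """Build a translation prompt in the style used by ALMA models.

    Template assumed from the ALMA repository's examples; check it
    against the checkpoint's documentation before relying on it.
    """
    return (
        f"Translate this from {src_lang} to {tgt_lang}:\n"
        f"{src_lang}: {source_text}\n"
        f"{tgt_lang}:"
    )
```

The model is expected to continue the prompt after the final `English:` with the translation.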