# Contrastive Preference Optimization
## ALMA-7B-R
MIT
ALMA-7B-R is a large language model further fine-tuned from ALMA-7B-LoRA using Contrastive Preference Optimization (CPO). It is designed specifically for machine translation and can match or outperform GPT-4 and the WMT competition winners (a sketch of the CPO objective follows this entry).
Machine Translation
Transformers

haoranxu
281
14
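
Several of the models listed here are trained with CPO. As a rough illustration only (not the authors' exact implementation), the objective can be sketched as a reference-free, DPO-style preference term plus a negative log-likelihood term on the preferred translation; the function below assumes pre-computed sequence log-probabilities and an illustrative `beta` weight.

```python
import torch
import torch.nn.functional as F

def cpo_loss(chosen_logps: torch.Tensor, rejected_logps: torch.Tensor, beta: float = 0.1) -> torch.Tensor:
    """Sketch of a CPO-style objective (assumed form, for illustration only).

    chosen_logps / rejected_logps: summed log-probabilities of the preferred
    and dis-preferred translations under the current policy, shape (batch,).
    """
    # Preference term: score the preferred translation above the dis-preferred
    # one; unlike DPO, no frozen reference model appears in the margin.
    prefer = -F.logsigmoid(beta * (chosen_logps - rejected_logps))
    # Behaviour-cloning regularizer: plain NLL on the preferred translation.
    nll = -chosen_logps
    return (prefer + nll).mean()
```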
## ALMA-13B-R
MIT
ALMA-13B-R is a machine translation model developed from the ALMA model, using Contrastive Preference Optimization (CPO) for LoRA fine-tuning; its performance surpasses GPT-4 and the WMT champion systems (a minimal loading example follows this entry).
Machine Translation
Transformers

haoranxu
4,216
81
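
A minimal Transformers inference sketch for this model. The hub id `haoranxu/ALMA-13B-R` is inferred from the author and name shown above, and the prompt template is an assumption borrowed from the ALMA project README; check the model card before relying on either.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hub id assumed from the listing above; verify it on the hub before use.
model_id = "haoranxu/ALMA-13B-R"

tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Prompt format assumed from the ALMA project README; adjust if the model
# card specifies a different template.
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, num_beams=5, do_sample=False)

# Strip the prompt tokens and print only the generated translation.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```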
## ALMA-13B-Pretrain
MIT
ALMA is a two-stage-trained large-language-model translation system based on LLaMA-2-13B. It significantly improves translation quality through an innovative paradigm of monolingual-data fine-tuning followed by parallel-corpus optimization.
Machine Translation
Transformers

haoranxu
3,491
10
## ALMA-13B
MIT
ALMA is an advanced translator based on large language models, employing a two-stage training paradigm (monolingual fine-tuning followed by parallel-corpus optimization). The 13B-LoRA variant achieves the best performance through LoRA fine-tuning on the LLaMA-2-13B foundation (a LoRA setup sketch follows this entry).
Machine Translation
Transformers

haoranxu
855
36
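
The recipe above applies LoRA in its second stage. A minimal setup sketch with the `peft` library is shown below; the base checkpoint name, rank, and target modules are illustrative assumptions, not the values actually used for ALMA.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base checkpoint and hyperparameters are assumptions for illustration;
# see the ALMA paper and repository for the exact second-stage settings.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")

lora_cfg = LoraConfig(
    r=16,                                  # adapter rank (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices are trained
```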
## ALMA-7B
MIT
ALMA is an advanced translator based on large language models, employing a two-stage training paradigm (monolingual fine-tuning followed by parallel-corpus optimization). This 7B version is built on the LLaMA-2-7B foundation.
Machine Translation
Transformers

haoranxu
256
25