
ALMA 13B

Developed by haoranxu
ALMA is an advanced translation model built on large language models, trained with a two-stage paradigm (monolingual fine-tuning followed by optimization on parallel data). The 13B-LoRA version reaches its best performance via LoRA fine-tuning on the LLaMA-2-13B base model.
Downloads 855
Release Date: 9/17/2023

Model Overview

A machine translation model based on the LLaMA-2-13B architecture, achieving high-quality translation through a two-stage fine-tuning approach: monolingual fine-tuning followed by fine-tuning on high-quality human-written parallel data.

Model Features

Two-stage Training Paradigm
First fine-tuned on large-scale monolingual data, then optimized on a small set of high-quality human-written parallel data, significantly improving translation quality.
LoRA Fine-tuning Technique
The 13B version employs the parameter-efficient LoRA fine-tuning method, reducing resource requirements while maintaining performance.
Contrastive Preference Optimization (ALMA-R)
The newer ALMA-R variant applies the CPO algorithm for preference learning and is reported to match or exceed GPT-4 and WMT competition-winning systems.
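The LoRA technique mentioned above freezes the base weights and trains only two small low-rank matrices per adapted layer. A minimal NumPy sketch of the idea (illustrative only; the matrix names `A`, `B` and the scaling convention `alpha / r` follow the original LoRA formulation, not ALMA's training code):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    """Forward pass with a LoRA adapter: y = x @ W + (alpha / r) * (x @ A) @ B.

    W is the frozen base weight (d_in, d_out); A (d_in, r) and B (r, d_out)
    are the small trainable matrices, so only r * (d_in + d_out) parameters
    are updated instead of d_in * d_out.
    """
    r = A.shape[1]
    return x @ W + (alpha / r) * (x @ A) @ B

def merge_lora(W, A, B, alpha=16.0):
    """Fold the adapter into the base weight for zero-overhead inference."""
    r = A.shape[1]
    return W + (alpha / r) * (A @ B)

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 8
x = rng.standard_normal((4, d_in))
W = rng.standard_normal((d_in, d_out))
A = rng.standard_normal((d_in, r)) * 0.01  # small random init
B = np.zeros((r, d_out))                   # B starts at zero: adapter is a no-op

y_adapter = lora_forward(x, W, A, B)
y_merged = x @ merge_lora(W, A, B)
```

Because `B` is initialized to zero, the adapted model starts out exactly equal to the frozen base model, and merging the adapter into `W` after training removes any inference overhead.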

Model Capabilities

Text Translation
Cross-language Conversion
High-quality Human-level Translation
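A sketch of how the model might be used for translation with `transformers` and `peft`. The prompt template follows the format described in the ALMA repository, and the Hugging Face repo IDs (`haoranxu/ALMA-13B-Pretrain`, `haoranxu/ALMA-13B-Pretrain-LoRA`) are assumptions to verify before use:

```python
def build_alma_prompt(source_text, source_lang="English", target_lang="German"):
    """Build a translation prompt in the ALMA style.

    The exact wording is taken from the ALMA repository's examples;
    treat it as an assumption if you use a different checkpoint.
    """
    return (
        f"Translate this from {source_lang} to {target_lang}:\n"
        f"{source_lang}: {source_text}\n"
        f"{target_lang}:"
    )

if __name__ == "__main__":
    # Loading the 13B base plus the LoRA adapter requires `transformers`,
    # `peft`, and a GPU with sufficient memory.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(
        "haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto"
    )
    model = PeftModel.from_pretrained(base, "haoranxu/ALMA-13B-Pretrain-LoRA")
    tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain")

    prompt = build_alma_prompt("The weather is nice today.")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128, num_beams=5)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Beam search (`num_beams=5`) is a common decoding choice for translation; greedy decoding also works if memory is tight.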

Use Cases

Professional Translation
Technical Document Translation
Accurately converts technical documents between languages
Approaches professional human translation quality
Literary Content Translation
Handles translation of literary texts
Preserves the original style and semantic accuracy
Localization Services
Product Localization
Provides multilingual support for globalized products