
NLLB-200 Distilled 350M En-Ko

Developed by dhtocks
This is a lightweight English-to-Korean translation model distilled from the NLLB-200 600M model; it has only 350M parameters and can run on CPU.
Downloads: 103
Release Time: 4/25/2024

Model Overview

This model focuses on the English-to-Korean translation task. Its layer count is reduced relative to the original model, which lowers the computational resource requirements and makes it suitable for users with limited resources.
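A minimal usage sketch with the Hugging Face transformers library is shown below. The repo id is an assumption based on the developer and model name listed above; substitute the actual repository id. The `eng_Latn` and `kor_Hang` language codes follow the standard NLLB-200 convention.

```python
# Sketch: loading the distilled model for English-to-Korean translation on CPU.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "dhtocks/nllb-200-distilled-350M_en-ko"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)  # loads on CPU by default

text = "The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")

# NLLB models select the target language via a forced BOS token
# ("kor_Hang" = Korean, Hangul script).
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("kor_Hang"),
    max_new_tokens=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```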

Model Features

Lightweight model
With only 350M parameters, it is lighter than the original NLLB-200 600M model and requires fewer computational resources.
Runnable on CPU
It runs on CPU without requiring mixed precision or quantization, making it suitable for users with limited resources.
Efficient inference
Inference takes about 1.43 seconds on CPU and 0.24 seconds on GPU, which suits near-real-time translation needs (see the timing sketch after this list).
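The sketch below shows one way to measure single-sentence latency on CPU. It assumes the `model` and `tokenizer` objects from the loading example above; the reported figures (1.43 s CPU / 0.24 s GPU) will vary with hardware, input length, and generation settings.

```python
# Rough CPU latency check for a single translation.
import time

import torch


def translate(text: str) -> str:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():  # inference only, no gradients needed
        out = model.generate(
            **inputs,
            forced_bos_token_id=tokenizer.convert_tokens_to_ids("kor_Hang"),
            max_new_tokens=128,
        )
    return tokenizer.batch_decode(out, skip_special_tokens=True)[0]


start = time.perf_counter()
translation = translate("This model runs comfortably without a GPU.")
print(f"{translation}  ({time.perf_counter() - start:.2f} s)")
```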

Model Capabilities

English-to-Korean translation
Running in low-resource environments

Use Cases

Translation applications
Real-time text translation
Translate English text into Korean in real time, for scenarios such as chat and email.
Reported translation quality: chrF(++) 24.6 (a scoring sketch follows this section).
Translation in low-resource environments
Run translation tasks on devices with limited computational resources, such as mobile or edge devices.
It can run efficiently on CPU.
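For reference, the sketch below shows how a chrF(++) score like the reported 24.6 is typically computed with sacrebleu. The hypothesis and reference sentences are placeholders; the evaluation set the developer used is not specified here.

```python
# Sketch: computing chrF++ over model outputs and reference translations.
from sacrebleu.metrics import CHRF

hypotheses = ["오늘 날씨가 좋네요."]        # model outputs, one per source sentence
references = [["오늘은 날씨가 좋습니다."]]   # one reference stream covering all sentences

chrf = CHRF(word_order=2)  # word_order=2 yields chrF++ rather than plain chrF
print(chrf.corpus_score(hypotheses, references))
```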