🙌 Yi: Building the Next Generation of Open-Source and Bilingual LLMs
The Yi series is a family of next-generation open-source large language models developed from scratch. The models deliver high performance in both English and Chinese across a range of benchmarks and come in several sizes to meet diverse user needs.
🚀 Quick Start
Choose your path
You can start using Yi models through several methods: pip, docker, llama.cpp, conda-lock, or the web demo.
Quick start - pip
Install the Python dependencies with pip, then run inference locally; the full step-by-step commands live in the upstream repository.
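A minimal inference sketch, assuming the chat weights are pulled from Hugging Face under the 01-ai organization and that transformers and torch are installed:

```python
# Minimal sketch: local inference with Hugging Face transformers.
# Assumes `pip install transformers torch` and enough memory for the 6B model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-6B-Chat"  # smallest chat model; swap in Yi-34B-Chat if you have the hardware
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# The chat checkpoints ship a chat template, so apply_chat_template builds the prompt.
messages = [{"role": "user", "content": "Hi, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```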
Quick start - docker
Run Yi inside a container so the CUDA and Python dependencies come prebuilt; the image name and run command are documented in the upstream repository.
Quick start - llama.cpp
Run a quantized build of Yi on consumer hardware (CPU or a modest GPU) through llama.cpp; see the upstream repository for the supported model files.
Quick start - conda - lock
Use conda-lock to reproduce a fully pinned conda environment before installing the dependencies.
Web demo
You can also try some of the models interactively in a browser-based web demo.
✨ Features
Introduction
- 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI.
- 🙌 Targeted as a bilingual language model and trained on a 3T multilingual corpus, the Yi series models are among the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example:
- The Yi-34B-Chat model landed in second place (following GPT-4 Turbo), outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).
- The Yi-34B model ranked first among all existing open-source models (such as Falcon-180B, Llama-70B, Claude) in both English and Chinese on various benchmarks, including the Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).
- 🙌 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, which reduce the effort required to build from scratch and enable the use of the same tools within the AI ecosystem.
If you're interested in Yi's adoption of the Llama architecture and its license usage policy, see Yi's relation with Llama. ⬇️
- Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.
- Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi.
- Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.
- However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.
- As Llama's structure is employed by the majority of open-source models, the key factors determining model performance are the training datasets, training pipelines, and training infrastructure.
- Developing in its own unique and proprietary way, Yi has independently created high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance, with Yi series models ranking just behind GPT-4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/).
💡 TL;DR
The Yi series models adopt the same model architecture as Llama but are NOT derivatives of Llama.
News
🔥 2024-07-29: The Yi Cookbook 1.0 is released, featuring tutorials and examples in both Chinese and English.
🎯 2024-05-13: The Yi-1.5 series models are open-sourced, further improving coding, math, reasoning, and instruction-following abilities.
🎯 2024-03-16: The Yi-9B-200K is open-sourced and available to the public.
🎯 2024-03-08: The Yi Tech Report is published!
🔔 2024-03-07: The long-text capability of the Yi-34B-200K model has been enhanced.
In the "Needle-in-a-Haystack" test, Yi-34B-200K's performance improved by 10.5 percentage points, rising from 89.3% to an impressive 99.8%. We continued to pre-train the model on a 5B-token long-context data mixture and demonstrated near-all-green performance.
🎯 2024-03-06: The Yi-9B is open-sourced and available to the public.
Yi-9B stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5, and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.
🎯 2024-01-23: The Yi-VL models, Yi-VL-34B and Yi-VL-6B, are open-sourced and available to the public.
Yi-VL-34B has ranked first among all existing open-source models in the latest benchmarks, including MMMU and CMMMU (based on data available up to January 2024).
🎯 2023-11-23: Chat models are open-sourced and available to the public.
This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ; a loading sketch for the 4-bit variants follows the list below.
- Yi-34B-Chat
- Yi-34B-Chat-4bits
- Yi-34B-Chat-8bits
- Yi-6B-Chat
- Yi-6B-Chat-4bits
- Yi-6B-Chat-8bits
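A minimal loading sketch for the 4-bit AWQ checkpoints, assuming they load through transformers with the autoawq backend installed (a CUDA GPU is required):

```python
# Sketch: loading the AWQ-quantized 4-bit chat model with transformers.
# Assumes `pip install transformers autoawq` and a CUDA GPU; exact backend
# requirements are documented in the upstream repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-34B-Chat-4bits"  # AWQ 4-bit; the -8bits variants are GPTQ-quantized
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```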
You can also try some of them interactively in the web demo.
🔔 2023-11-23: The Yi Series Models Community License Agreement is updated to v2.1.
🎯 2023-11-05: The base models, Yi-6B-200K and Yi-34B-200K, are open-sourced and available to the public.
This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K.
🎯 2023-11-02: The base models, Yi-6B and Yi-34B, are open-sourced and available to the public.
The first public release contains two bilingual (English/Chinese) base models with parameter sizes of 6B and 34B. Both are trained with a 4K sequence length, which can be extended to 32K at inference time (see the sketch below).
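One common way to extend a RoPE-based model's context at inference time is RoPE scaling; the sketch below uses transformers' rope_scaling config, but the scaling type and factor are illustrative assumptions, not an official Yi recommendation:

```python
# Sketch: extending the 4K-trained base model toward a 32K context window via
# dynamic RoPE scaling. The type/factor values are illustrative only, and the
# exact config format varies across transformers versions.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-6B",
    device_map="auto",
    torch_dtype="auto",
    rope_scaling={"type": "dynamic", "factor": 8.0},  # 4K * 8 = 32K positions
)
```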
Models
Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements.
If you want to deploy Yi models, make sure you meet the software and hardware requirements.
Chat models
| Model | Download |
|---|---|
| Yi-34B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-34B-Chat-4bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-34B-Chat-8bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-6B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-6B-Chat-4bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-6B-Chat-8bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
📚 Documentation
How to use Yi?
- Fine-tuning: adapt Yi base models to your own data and task.
- Quantization: shrink models with GPTQ or AWQ for cheaper inference.
- Deployment: check the software and hardware requirements, then serve the model (a serving sketch follows this list).
- FAQ: answers to frequently asked questions.
- Learning hub: tutorials and community learning resources.
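As one concrete serving path, the sketch below runs offline batch inference with vLLM; treating vLLM as the deployment route here is an assumption, and any Yi checkpoint id can be substituted:

```python
# Sketch: offline batch inference with vLLM (assumes `pip install vllm` and a CUDA GPU).
from vllm import LLM, SamplingParams

llm = LLM(model="01-ai/Yi-6B-Chat")  # illustrative choice; other Yi checkpoints work the same way
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Write a haiku about open-source models."], params)
print(outputs[0].outputs[0].text)
```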
Why Yi?
Ecosystem
Upstream
Because Yi adopts the standard Llama architecture, it works out of the box with the same upstream tooling used across the AI ecosystem, such as Hugging Face transformers.
Downstream
- Serving: run Yi behind an inference server for production traffic.
- Quantization: community-quantized variants for lower-resource hardware.
- Fine-tuning: third-party fine-tuned models built on top of Yi.
- API: hosted API access to Yi models (a client sketch follows this list).
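A client sketch for a hosted Yi API, assuming the provider exposes an OpenAI-compatible endpoint; the base URL and model id below are placeholders to replace with your provider's documented values:

```python
# Sketch: calling a hosted Yi API through an OpenAI-compatible client.
# The base_url and model id are hypothetical placeholders -- substitute the
# values from your API provider's documentation.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-yi-provider.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)
resp = client.chat.completions.create(
    model="yi-34b-chat",  # hypothetical model id
    messages=[{"role": "user", "content": "Hello, Yi!"}],
)
print(resp.choices[0].message.content)
```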
Benchmarks
- Base model performance
- Chat model performance
Tech report
- Citation: cite the Yi Tech Report (published 2024-03-08, as noted in the News section above).
Who can use Yi?
Everyone! The Yi series models are released under the Apache 2.0 license, which permits personal, academic, and commercial use.
📄 License
The Yi series models are released under the Apache 2.0 license.

