Yi: Building the Next Generation of Open-Source and Bilingual LLMs
The Yi series is a family of open-source large language models trained from scratch. They deliver strong performance in language understanding, commonsense reasoning, and reading comprehension, with different model sizes suited to various application scenarios.
Quick Start
Choose your path
You can start using Yi through different methods, such as pip, docker, llama.cpp, conda-lock, or the web demo.
Quick start - pip
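The upstream quick start for this path installs PyTorch and transformers via pip and then runs chat inference. A minimal sketch, assuming the 01-ai/Yi-6B-Chat checkpoint on Hugging Face and the standard transformers chat-template API:

```python
# Minimal chat-inference sketch; assumes `pip install torch transformers`
# and the 01-ai/Yi-6B-Chat checkpoint published on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "01-ai/Yi-6B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
).eval()

# Build the prompt with the model's chat template and generate a reply.
messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```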
Quick start - docker
Quick start - llama.cpp
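The llama.cpp path runs quantized GGUF conversions of the models on commodity hardware. A minimal sketch using the llama-cpp-python bindings; the GGUF file path below is hypothetical and stands in for whatever conversion you download or produce:

```python
# Sketch only: requires `pip install llama-cpp-python` and a GGUF
# conversion of a Yi chat model; the file path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./yi-6b-chat.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```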
Quick start - conda-lock
Web demo
You can try some of the models interactively through the hosted web demo.
⨠Features
Introduction
- 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI.
- 🙌 Targeted as a bilingual language model and trained on a 3T multilingual corpus, the Yi series models rank among the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example:
- The Yi-34B-Chat model landed in second place (following GPT-4 Turbo), outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).
- The Yi-34B model ranked first among all existing open-source models (such as Falcon-180B, Llama-70B, Claude) in both English and Chinese on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).
- 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities for reducing the effort required to build from scratch and enabling the use of the same tools within the AI ecosystem.
If you're interested in Yi's adoption of the Llama architecture and its license usage policy, see Yi's relation with Llama below.
- Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.
- Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi.
- Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.
- However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.
- As Llama's structure is employed by the majority of open-source models, the key factors determining model performance are training datasets, training pipelines, and training infrastructure.
- Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance, with Yi series models ranking just behind GPT-4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/).
💡 TL;DR
The Yi series models adopt the same model architecture as Llama but are NOT derivatives of Llama.
News
🔥 2024-07-29: The Yi Cookbook 1.0 is released, featuring tutorials and examples in both Chinese and English.
🎯 2024-05-13: The Yi-1.5 series models are open-sourced, further improving coding, math, reasoning, and instruction-following abilities.
🎯 2024-03-16: The Yi-9B-200K is open-sourced and available to the public.
🎯 2024-03-08: The Yi Tech Report is published!
🔔 2024-03-07: The long-text capability of Yi-34B-200K has been enhanced.
In the "Needle-in-a-Haystack" test, Yi-34B-200K's performance improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continued to pre-train the model on a 5B-token long-context data mixture and demonstrated near-all-green performance.
🎯 2024-03-06: The Yi-9B is open-sourced and available to the public.
Yi-9B stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5, and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.
🎯 2024-01-23: The Yi-VL models, Yi-VL-34B and Yi-VL-6B, are open-sourced and available to the public.
Yi-VL-34B has ranked first among all existing open-source models in the latest benchmarks, including MMMU and CMMMU (based on data available up to January 2024).
🎯 2023-11-23: Chat models are open-sourced and available to the public.
This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ (a loading sketch follows the list below).
- Yi-34B-Chat
- Yi-34B-Chat-4bits
- Yi-34B-Chat-8bits
- Yi-6B-Chat
- Yi-6B-Chat-4bits
- Yi-6B-Chat-8bits
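As a hedged loading sketch for the quantized releases: recent transformers versions can load AWQ checkpoints directly once autoawq is installed, so the 4-bit chat model should load like any other causal LM (the repo id below mirrors the model name above):

```python
# Hedged sketch: loading the 4-bit AWQ chat checkpoint through transformers.
# Assumes `pip install autoawq` and a CUDA GPU; repo id mirrors the model name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-34B-Chat-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```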
You can try some of them interactively through the hosted web demo.
🔔 2023-11-23: The Yi Series Models Community License Agreement is updated to v2.1.
🎯 2023-11-05: The base models, Yi-6B-200K and Yi-34B-200K, are open-sourced and available to the public.
This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K.
🎯 2023-11-02: The base models, Yi-6B and Yi-34B, are open-sourced and available to the public.
The first public release contains two bilingual (English/Chinese) base models with parameter sizes of 6B and 34B. Both are trained with a 4K sequence length, which can be extended to 32K during inference time, as sketched below.
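As an illustration of the 4K-to-32K extension, one simple knob is a transformers config override at load time; this is a hedged sketch, not necessarily the mechanism the Yi authors used, and extrapolation quality is a property of the model itself:

```python
# Hedged sketch: raise the positional limit at load time via a config
# override. The 32K figure comes from the release note above; whether the
# model extrapolates well to it is independent of this override.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-6B",
    torch_dtype="auto",
    device_map="auto",
    max_position_embeddings=32768,  # trained at 4K; extended for inference
)
```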
Models
Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements (a minimal LoRA sketch follows below).
If you want to deploy Yi models, make sure you meet the software and hardware requirements.
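Since the document points to fine-tuning but includes no recipe here, below is a minimal LoRA sketch with the peft library; the target module names are an assumption based on the Llama-style architecture described above, and you would pair this with your own Trainer and dataset:

```python
# Hedged sketch: attach LoRA adapters to a Yi base model with peft.
# Target module names assume the Llama-style attention projections.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-6B", torch_dtype="auto", device_map="auto"
)
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # sanity-check the adapter size
```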
Chat models
| Model | Download |
|---|---|
| Yi-34B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-34B-Chat-4bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-34B-Chat-8bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-6B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-6B-Chat-4bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-6B-Chat-8bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
Documentation
How to use Yi?
Fine-tuning
Quantization
Deployment
FAQ
Learning hub
Why Yi?
Ecosystem
Upstream
Downstream
Serving
Quantization
Fine-tuning
API
Benchmarks
Base model performance
Chat model performance
Tech report
Citation
Who can use Yi?
License
The license of this project is Apache-2.0.

