🚀 MiniCPM
MiniCPM is a series of end-side large language models jointly open-sourced by ModelBest and the Natural Language Processing Laboratory of Tsinghua University (TsinghuaNLP). It offers strong performance with relatively few parameters, making it a competitive choice for a variety of language tasks.
✨ Features
MiniCPM is an end-side LLM developed by ModelBest Inc. and TsinghuaNLP, with only 1.2B parameters excluding token embeddings in its main language model, MiniCPM-1B.
- After SFT, MiniCPM performs similarly to Mistral-7B on public comprehensive evaluation sets, excelling in Chinese, math, and coding abilities. Overall, it outperforms models like Llama2-13B, MPT-30B, and Falcon-40B.
- After DPO, MiniCPM-2B surpasses many representative open-source large models such as Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, and Zephyr-7B-alpha on MTBench, currently the evaluation set closest to real user experience.
- Based on MiniCPM-2B, the end-side multi-modal large model MiniCPM-V achieves the best overall performance among models of the same scale. It outperforms existing multi-modal large models built on Phi-2 and matches or even exceeds the 9.6B Qwen-VL-Chat on some evaluation sets.
- After Int4 quantization, MiniCPM can be deployed and inferred on mobile phones, with a streaming output speed slightly higher than the human speaking speed. MiniCPM-V is also the first multi-modal large model to be deployed on mobile phones.
- Parameter-efficient fine-tuning can be done with a single 1080/2080 GPU, and full-parameter fine-tuning with a 3090/4090 GPU, which keeps the cost of secondary development on MiniCPM low (see the sketch after this list).
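For illustration, a minimal parameter-efficient fine-tuning setup might use LoRA via the peft library, as sketched below. This recipe is an assumption rather than the official training setup, and the target module names are typical for Llama-style models and may differ for MiniCPM.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

path = 'openbmb/MiniCPM-2B-sft-bf16'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, trust_remote_code=True)

# LoRA freezes the base weights and trains small adapter matrices,
# which is why a single consumer GPU is enough for fine-tuning.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         target_modules=['q_proj', 'v_proj'],  # assumed module names
                         task_type='CAUSAL_LM')
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()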
We fully open-source the model parameters of MiniCPM-2B for academic research and limited commercial use. We also provide all checkpoints during training and most non-proprietary data for model mechanism research.
- The SFT and DPO versions based on MiniCPM-2B and human preference data: MiniCPM-2B-SFT/DPO
- The multi-modal model MiniCPM-V based on MiniCPM-2B, which outperforms models of similar size such as those built on Phi-2
- The INT4 quantized version MiniCPM-2B-SFT/DPO-Int4 based on MiniCPM-2B-SFT/DPO (see the loading sketch after this list)
- Mobile phone applications based on MLC-LLM and LLMFarm. Both the language model and the multi-modal model can run inference on smartphones.
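The quantized checkpoints can presumably be loaded in the same way as the bf16 model; the sketch below assumes the repository name openbmb/MiniCPM-2B-dpo-int4 (inferred from the naming above) and that the quantized weights load through the same trust_remote_code path.

from transformers import AutoModelForCausalLM, AutoTokenizer

path = 'openbmb/MiniCPM-2B-dpo-int4'  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(path)
# The int4 checkpoint already stores quantized weights, so no torch_dtype is passed here.
model = AutoModelForCausalLM.from_pretrained(path, device_map='cuda', trust_remote_code=True)
response, history = model.chat(tokenizer, 'Which is the highest mountain in Shandong Province?', temperature=0.8, top_p=0.8)
print(response)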
📚 Documentation
Evaluation Results
Detailed evaluation results are available in the GitHub repository.
⚠️ Important Note
We have found that generation quality with the Hugging Face Transformers implementation is slightly lower than with vLLM, so benchmarking with vLLM is recommended. We are investigating the cause.
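For reference, a minimal vLLM inference sketch is shown below. It assumes a vLLM build that supports the MiniCPM architecture via trust_remote_code; the prompt and sampling values are only illustrative.

from vllm import LLM, SamplingParams

# Assumes a vLLM version that can load MiniCPM through its remote code.
llm = LLM(model='openbmb/MiniCPM-2B-sft-bf16', trust_remote_code=True, dtype='bfloat16')
sampling = SamplingParams(temperature=0.8, top_p=0.8, max_tokens=256)
outputs = llm.generate(['Which is the highest mountain in Shandong Province?'], sampling)
print(outputs[0].outputs[0].text)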
Limitations
- Due to the limited model scale, the model may suffer from hallucination. Since the DPO model tends to generate longer responses, it is more prone to hallucination. We will continue to iterate and improve MiniCPM.
- To keep the model general-purpose for academic research, we did not perform any identity training. Since part of the training data comes from the ShareGPT open-source corpus, the model may output identity information similar to that of the GPT series models.
- Due to the limited model size, the model's output is strongly influenced by the prompt, so results may be inconsistent across multiple attempts.
- Due to limited model capacity, the model's knowledge recall is not always accurate. In the future, we will combine retrieval-augmented generation (RAG) to improve the model's knowledge memory.
📦 Installation
Install transformers>=4.36.0 and accelerate before using the model.
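For example, with pip:

pip install "transformers>=4.36.0" accelerate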
💻 Usage Examples
Basic Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(0)
path = 'openbmb/MiniCPM-2B-sft-bf16'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)
responds, history = model.chat(tokenizer, "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?", temperature=0.8, top_p=0.8)
print(responds)
Expected Output
山东省最高的山是泰山,海拔1545米。
相对于黄山(海拔1864米),泰山海拔较低,相差约319米。
⚠️ Important Note
It is necessary to specify the model's data type explicitly in from_pretrained (e.g., torch_dtype=torch.bfloat16 as in the example above); otherwise large calculation errors can occur.
📄 License
Model LICENSE
- This repository is released under the Apache-2.0 License.
- The usage of MiniCPM model weights must strictly follow the General Model License (GML).
- The models and weights of MiniCPM are completely free for academic research.
- If you intend to utilize the model for commercial purposes, please reach out to cpm@modelbest.cn to obtain the certificate of authorization.
Statement
As a language model, MiniCPM generates content by learning from a vast amount of text. However, it does not possess the ability to comprehend or express personal opinions or value judgments. Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.
📚 Citation
If you find MiniCPM helpful for your work, please consider citing the following technical report.
@inproceedings{minicpm2024,
  title={MiniCPM: Unveiling the Potential of End-side Large Language Models},
  booktitle={OpenBMB Blog},
  year={2024}
}
📦 Model Download