🚀 MiniCPM
MiniCPM is a series of end-side large language models jointly open-sourced by ModelBest and the Natural Language Processing Laboratory of Tsinghua University. It offers strong performance with relatively few parameters and can be deployed on mobile devices.
🚀 Quick Start
MiniCPM is an end-side LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. After various fine-tuning stages, it performs strongly on multiple benchmarks, matching or even surpassing many larger models. It can also be deployed on smartphones, and the cost of developing on top of it is relatively low.
✨ Features
- High Performance: After SFT, MiniCPM performs similarly to Mistral-7B on public comprehensive benchmarks, with better performance in Chinese, math, and coding. It outperforms models such as Llama2-13B, MPT-30B, and Falcon-40B. After DPO, MiniCPM-2B surpasses many representative open-source large models, including Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, and Zephyr-7B-alpha, on MTBench.
- Multimodal Capability: The end-side multimodal large model MiniCPM-V, based on MiniCPM-2B, achieves the best overall performance among models of the same scale, surpassing existing multimodal large models built on Phi-2, and reaches comparable or better performance than the 9.6B Qwen-VL-Chat on some benchmarks.
- Mobile Deployment: After Int4 quantization, MiniCPM can be deployed and run on mobile phones, with a streaming output speed slightly higher than human speaking speed. MiniCPM-V is the first multimodal model that can be deployed on smartphones.
- Low Development Cost: Parameter-efficient finetuning can be done on a single 1080/2080 GPU, and full-parameter finetuning on a 3090/4090 GPU (see the sketch after this list).
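As a rough illustration of this low-cost finetuning path, here is a minimal parameter-efficient finetuning sketch using LoRA via the Hugging Face `peft` library. It is not an official recipe: the model path follows the naming in the download table below, and the LoRA hyperparameters and target module names are illustrative assumptions.

```python
# Minimal LoRA finetuning sketch (assumes the `peft` library is installed).
# The model path follows the naming in the download table; target_modules and
# hyperparameters are illustrative assumptions, not an official recipe.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

path = 'openbmb/MiniCPM-2B-sft-bf16'
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.bfloat16, trust_remote_code=True
)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # inspect the model to confirm module names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters receive gradients
```

Because only the adapter weights are trained, memory use stays far below full finetuning, which is what makes finetuning on a single consumer GPU feasible.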
📦 Installation
Install `transformers>=4.36.0` and `accelerate` (e.g. `pip install "transformers>=4.36.0" accelerate`), then run the example in the Usage Examples section below.
⚠️ Important Note
The model's data type must be specified explicitly in `from_pretrained` (e.g. `torch_dtype=torch.bfloat16`, as in the example below); otherwise large numerical errors may occur.
💻 Usage Examples
Basic Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

torch.manual_seed(0)

path = 'openbmb/MiniCPM-2B-dpo-bf16'
tokenizer = AutoTokenizer.from_pretrained(path)
# Specify torch_dtype explicitly to avoid large numerical errors.
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)

# Prompt (Chinese): "Which is the highest mountain in Shandong Province?
# Is it taller or shorter than Mount Huang, and by how much?"
responds, history = model.chat(tokenizer, "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?", temperature=0.8, top_p=0.8)
print(responds)
```
Expected Output
```
山东省最高的山是泰山,海拔1545米。

相对于黄山(海拔1864米),泰山海拔较低,相差约319米。
```

(Translation: The highest mountain in Shandong Province is Mount Tai, at an elevation of 1,545 meters. Compared with Mount Huang (1,864 meters), Mount Tai is lower, with a difference of about 319 meters.)
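The `chat` call also returns a `history` object, which suggests multi-turn use. Below is a hypothetical follow-up turn; the exact signature of the remote-code `chat` method (including whether it accepts a `history` keyword) should be checked in the model's custom modeling code.

```python
# Hypothetical multi-turn follow-up, reusing `model`, `tokenizer`, and `history`
# from the example above. Assumption: chat() accepts the previously returned
# `history`; verify against the model's custom code.
follow_up, history = model.chat(
    tokenizer,
    "Please answer in English this time.",
    history=history,
    temperature=0.8,
    top_p=0.8,
)
print(follow_up)
```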
📚 Documentation
Evaluation Results
Detailed evaluation results are available in the GitHub repository.
⚠️ Important Note
We have found that generation quality with the Hugging Face implementation is slightly lower than with vLLM, so we recommend benchmarking with vLLM. We are investigating the cause.
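For reference, a minimal vLLM generation sketch is shown below. Support for the MiniCPM architecture in your installed vLLM version is an assumption here, the sampling settings are illustrative, and in practice the query should be wrapped in the model's chat prompt format.

```python
# Minimal vLLM generation sketch (assumes vLLM support for the MiniCPM
# architecture; sampling settings are illustrative).
from vllm import LLM, SamplingParams

llm = LLM(model='openbmb/MiniCPM-2B-dpo-bf16', trust_remote_code=True)
sampling = SamplingParams(temperature=0.8, top_p=0.8, max_tokens=256)

# In practice, format the query with the model's chat prompt template first.
outputs = llm.generate(["Which is the highest mountain in Shandong Province?"], sampling)
print(outputs[0].outputs[0].text)
```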
Limitations
- Hallucination Issues: Due to limitations in model size, the model may produce hallucinations. Because the DPO model tends to generate longer responses, hallucinations are more likely to occur. We will continue to iterate on and improve the MiniCPM models.
- Identity Information: To preserve the model's generality for academic research, we did not perform any identity-related training. Because part of the training data comes from the open-source ShareGPT corpus, the model may output identity information similar to that of GPT-series models.
- Inconsistent Output: Due to the limited model size, the model's output is strongly influenced by the prompt, which may lead to inconsistent results across multiple attempts.
- Knowledge Memorization: Due to limited model capacity, the model's knowledge memorization is not accurate. In the future, we will combine retrieval-augmented generation (RAG) to enhance its knowledge memorization ability.
📦 Model Download
| Platform | Available Versions |
|----------|--------------------|
| HuggingFace | sft-bf16, sft-fp32, dpo-bf16, dpo-fp16, dpo-fp32 |
| ModelScope | sft-bf16, sft-fp32, dpo-bf16, dpo-fp16, dpo-fp32 |
| WiseModel | sft-bf16, sft-fp32, dpo-bf16, dpo-fp16, dpo-fp32 |
📄 License
Model LICENSE
- This repository is released under the Apache-2.0 License.
- The usage of MiniCPM model weights must strictly follow the General Model License (GML).
- The models and weights of MiniCPM are completely free for academic research.
- If you intend to utilize the model for commercial purposes, please reach out to cpm@modelbest.cn to obtain the certificate of authorization.
Statement
As a language model, MiniCPM generates content by learning from a vast amount of text. However, it does not possess the ability to comprehend or express personal opinions or value judgments. Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.
📚 Citation
Please cite our technical report if you find our work valuable.
```bibtex
@inproceedings{minicpm2024,
  title={MiniCPM: Unveiling the Potential of End-side Large Language Models},
  booktitle={OpenBMB Blog},
  year={2024}
}
```