🚀 MiniCPM-Embedding
MiniCPM-Embedding is a bilingual Chinese-English text embedding model jointly developed by ModelBest Inc. (面壁智能), the Natural Language Processing Laboratory of Tsinghua University (THUNLP), and the Information Retrieval Group of Northeastern University (NEUIR). It delivers strong Chinese and English retrieval performance as well as strong Chinese-English cross-lingual retrieval, providing an efficient and accurate solution for text retrieval tasks.
🚀 Quick Start
Input Format
The model supports query-side instructions in the following format:
Instruction: {{ instruction }} Query: {{ query }}
For example:
Instruction: 為這個醫學問題檢索相關回答。Query: 咽喉癌的成因是什麼?
Instruction: Given a claim about climate change, retrieve documents that support or refute the claim. Query: However the warming trend is slower than most climate models have forecast.
You can also omit the instruction and use the following format:
Query: {{ query }}
The instructions we used when evaluating on BEIR and C-MTEB/Retrieval are listed in instructions.json; the other benchmarks are run without instructions. On the document side, the raw document text is used as input directly.
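For illustration, here is a minimal sketch of composing model inputs in this format; the `build_query` helper is our own convenience function, not part of the model's API:

```python
# Hypothetical helper that assembles a query string in the documented format
def build_query(query, instruction=None):
    if instruction:
        return f"Instruction: {instruction} Query: {query}"
    return f"Query: {query}"

print(build_query("咽喉癌的成因是什麼?", "為這個醫學問題檢索相關回答。"))
print(build_query("What is the capital of China?"))
```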
Requirements
```
transformers==4.37.2
```
Example Scripts
Huggingface Transformers
```python
from transformers import AutoModel, AutoTokenizer
import torch
import torch.nn.functional as F

model_name = "openbmb/MiniCPM-Embedding"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.float16).to("cuda")
# You can also use the following line to enable the Flash Attention 2 implementation
# model = AutoModel.from_pretrained(model_name, trust_remote_code=True, attn_implementation="flash_attention_2", torch_dtype=torch.float16).to("cuda")
model.eval()

# As we scale hidden states in `model.forward`, mean pooling here actually works as weighted mean pooling
def mean_pooling(hidden, attention_mask):
    s = torch.sum(hidden * attention_mask.unsqueeze(-1).float(), dim=1)
    d = attention_mask.sum(dim=1, keepdim=True).float()
    reps = s / d
    return reps

@torch.no_grad()
def encode(input_texts):
    batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True).to("cuda")
    outputs = model(**batch_dict)
    attention_mask = batch_dict["attention_mask"]
    hidden = outputs.last_hidden_state
    reps = mean_pooling(hidden, attention_mask)
    embeddings = F.normalize(reps, p=2, dim=1).detach().cpu().numpy()
    return embeddings

queries = ["中國的首都是哪裡?"]
passages = ["beijing", "shanghai"]
INSTRUCTION = "Query: "
queries = [INSTRUCTION + query for query in queries]

embeddings_query = encode(queries)
embeddings_doc = encode(passages)

scores = (embeddings_query @ embeddings_doc.T)
print(scores.tolist())  # [[0.3535913825035095, 0.18596848845481873]]
```
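For a larger corpus you would typically encode documents in batches rather than all at once; below is a minimal sketch that reuses the `encode` helper above (the batch size is an arbitrary choice):

```python
import numpy as np

def encode_corpus(texts, batch_size=32):
    # Encode the documents in fixed-size batches and stack the resulting embeddings
    parts = [encode(texts[i:i + batch_size]) for i in range(0, len(texts), batch_size)]
    return np.concatenate(parts, axis=0)

embeddings_doc = encode_corpus(passages)
```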
Sentence Transformers
```python
import torch
from sentence_transformers import SentenceTransformer

model_name = "openbmb/MiniCPM-Embedding"
model = SentenceTransformer(model_name, trust_remote_code=True, model_kwargs={"torch_dtype": torch.float16})
# You can also use the following line to enable the Flash Attention 2 implementation
# model = SentenceTransformer(model_name, trust_remote_code=True, model_kwargs={"attn_implementation": "flash_attention_2", "torch_dtype": torch.float16})

queries = ["中國的首都是哪裡?"]
passages = ["beijing", "shanghai"]
INSTRUCTION = "Query: "

embeddings_query = model.encode(queries, prompt=INSTRUCTION)
embeddings_doc = model.encode(passages)

scores = (embeddings_query @ embeddings_doc.T)
print(scores.tolist())  # [[0.35365450382232666, 0.18592746555805206]]
```
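If you also want a query-side instruction, it can be folded into the same `prompt` argument; a minimal sketch, where the instruction string is only an illustrative example:

```python
# Illustrative instruction; substitute any task-specific instruction here
INSTRUCTION_WITH_TASK = "Instruction: Given a web search query, retrieve relevant passages. Query: "
embeddings_query = model.encode(queries, prompt=INSTRUCTION_WITH_TASK)
```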
✨ Key Features
- Strong Chinese and English retrieval performance.
- Strong Chinese-English cross-lingual retrieval performance.
📦 Installation
Make sure transformers==4.37.2 is installed in your environment. You can install it with:
```
pip install transformers==4.37.2
```
📚 Documentation
Model Training
MiniCPM-Embedding is trained from MiniCPM-2B-sft-bf16 and adopts bidirectional attention with Weighted Mean Pooling [1]. Training follows a multi-stage scheme and uses roughly 6 million examples in total, including open-source, synthetic, and proprietary data.
RAG Toolkit Series
Feel free to check out the rest of the RAG toolkit series:
- Retrieval model: MiniCPM-Embedding
- Reranker: MiniCPM-Reranker
- LoRA plugin for RAG scenarios: MiniCPM3-RAG-LoRA
Model Information
Attribute | Details |
---|---|
Model type | Bilingual (Chinese-English) text embedding model |
Model size | 2.4B |
Embedding dimension | 2304 |
Max input tokens | 512 |
Base model | openbmb/MiniCPM-2B-sft-bf16 |
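As a quick sanity check against the table above, embeddings produced by the example script should be 2304-dimensional; a sketch that assumes the `encode` helper from the Quick Start section:

```python
emb = encode(["hello world"])
assert emb.shape == (1, 2304)  # one input text, 2304-dimensional embedding
```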
🔧 Technical Details
The model architecture uses bidirectional attention and Weighted Mean Pooling [1], and training proceeds in multiple stages. The final hidden states are scaled inside `model.forward`, so the plain mean pooling in the example script effectively acts as weighted mean pooling.
[1] Muennighoff, N. (2022). SGPT: GPT Sentence Embeddings for Semantic Search. arXiv preprint arXiv:2202.08904.
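For intuition only, the sketch below shows what SGPT-style position-weighted mean pooling [1] computes; in MiniCPM-Embedding the weighting is realized by scaling hidden states inside `model.forward`, so this illustrates the technique rather than the model's exact implementation:

```python
import torch

def sgpt_weighted_mean_pooling(hidden, attention_mask):
    # Position weights grow linearly with the token index, so later tokens
    # contribute more to the sentence embedding (SGPT-style weighting).
    weights = torch.arange(1, hidden.size(1) + 1, device=hidden.device).float()
    weights = weights.unsqueeze(0) * attention_mask.float()   # (batch, seq_len)
    weights = weights / weights.sum(dim=1, keepdim=True)      # normalize per sequence
    return torch.sum(hidden * weights.unsqueeze(-1), dim=1)   # (batch, hidden_dim)
```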
📄 License
- The code in this repository is released under the Apache-2.0 license.
- Use of the MiniCPM-Embedding model weights must comply with the MiniCPM Model License.
- The MiniCPM-Embedding model weights are fully open for academic research. To use the model for commercial purposes, please fill out this questionnaire.
💻 Usage Examples
Basic Usage
The following is a basic example of encoding text with MiniCPM-Embedding through the Huggingface Transformers library:
```python
from transformers import AutoModel, AutoTokenizer
import torch
import torch.nn.functional as F

model_name = "openbmb/MiniCPM-Embedding"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.float16).to("cuda")
model.eval()

def mean_pooling(hidden, attention_mask):
    s = torch.sum(hidden * attention_mask.unsqueeze(-1).float(), dim=1)
    d = attention_mask.sum(dim=1, keepdim=True).float()
    reps = s / d
    return reps

@torch.no_grad()
def encode(input_texts):
    batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True).to("cuda")
    outputs = model(**batch_dict)
    attention_mask = batch_dict["attention_mask"]
    hidden = outputs.last_hidden_state
    reps = mean_pooling(hidden, attention_mask)
    embeddings = F.normalize(reps, p=2, dim=1).detach().cpu().numpy()
    return embeddings

queries = ["中國的首都是哪裡?"]
passages = ["beijing", "shanghai"]
INSTRUCTION = "Query: "
queries = [INSTRUCTION + query for query in queries]

embeddings_query = encode(queries)
embeddings_doc = encode(passages)

scores = (embeddings_query @ embeddings_doc.T)
print(scores.tolist())
```
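As a follow-up, the passages can be ranked for each query by their similarity scores; a minimal sketch using the `scores` array produced above:

```python
import numpy as np

# Sort the passages for the first query from most to least similar
order = np.argsort(-scores[0])
for idx in order:
    print(passages[idx], float(scores[0][idx]))
```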
Advanced Usage
To enable the Flash Attention 2 implementation, pass the corresponding argument when loading the model:
```python
from transformers import AutoModel, AutoTokenizer
import torch
import torch.nn.functional as F

model_name = "openbmb/MiniCPM-Embedding"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True, attn_implementation="flash_attention_2", torch_dtype=torch.float16).to("cuda")
model.eval()

# The rest of the code is the same as in the basic usage example
```
📊 Evaluation Results
Chinese and English Retrieval Results
Model | C-MTEB/Retrieval (NDCG@10) | BEIR (NDCG@10) |
---|---|---|
bge-large-zh-v1.5 | 70.46 | - |
gte-large-zh | 72.49 | - |
Zhihui_LLM_Embedding | 76.74 | - |
bge-large-en-v1.5 | - | 54.29 |
gte-en-large-v1.5 | - | 57.91 |
NV-Retriever-v1 | - | 60.9 |
bge-en-icl | - | 62.16 |
NV-Embed-v2 | - | 62.65 |
me5-large | 63.66 | 51.43 |
bge-m3(Dense) | 65.43 | 48.82 |
gte-multilingual-base(Dense) | 71.95 | 51.08 |
gte-Qwen2-1.5B-instruct | 71.86 | 58.29 |
gte-Qwen2-7B-instruct | 76.03 | 60.25 |
bge-multilingual-gemma2 | 73.73 | 59.24 |
MiniCPM-Embedding | 76.76 | 58.56 |
MiniCPM-Embedding+MiniCPM-Reranker | 77.08 | 61.61 |
Chinese-English Cross-lingual Retrieval Results
Model | MKQA En-Zh_CN (Recall@20) | NeuCLIR22 (NDCG@10) | NeuCLIR23 (NDCG@10) |
---|---|---|---|
me5-large | 44.3 | 9.01 | 25.33 |
bge-m3(Dense) | 66.4 | 30.49 | 41.09 |
gte-multilingual-base(Dense) | 68.2 | 39.46 | 45.86 |
gte-Qwen2-1.5B-instruct | 68.52 | 49.11 | 45.05 |
gte-Qwen2-7B-instruct | 68.27 | 49.14 | 49.6 |
MiniCPM-Embedding | 72.95 | 52.65 | 49.95 |
MiniCPM-Embedding+MiniCPM-Reranker | 74.33 | 53.21 | 54.12 |