🚀 gte-Qwen1.5-7B-instruct
gte-Qwen1.5-7B-instruct is the latest addition to the gte family of embedding models. It is built on the Qwen1.5-7B large language model and draws on the strong natural language processing capabilities of Qwen1.5-7B. Optimized with our advanced embedding training techniques, the model incorporates the following key improvements:
- Bidirectional attention is integrated, enhancing contextual understanding.
- Instruction tuning is applied on the query side only, improving efficiency.
- The model is trained comprehensively on a large-scale multilingual text corpus spanning diverse domains and scenarios. This training combines weakly supervised and supervised data, ensuring the model is applicable to many languages and a wide range of downstream tasks.
We have also released gte-base-en-v1.5 and gte-large-en-v1.5, two English embedding models that achieve state-of-the-art results on the MTEB benchmark within their model-size category and support context lengths of up to 8192 tokens.
✨ Key Features
- Strong contextual understanding: bidirectional attention lets the model capture contextual information more thoroughly, yielding more accurate embedding representations.
- Efficient instruction tuning: instruction tuning is applied only on the query side, avoiding unnecessary computational overhead and improving runtime efficiency.
- Multilingual support: training on a large-scale multilingual corpus enables the model to handle text in many languages, making it suitable for applications worldwide.
- Broad downstream applicability: thanks to comprehensive training, the model performs well on a wide range of downstream tasks such as text classification, retrieval, and clustering.
📦 Installation
Running this model requires the following libraries:
transformers>=4.39.2
flash_attn>=2.5.6
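Both can typically be installed with pip (note that building flash_attn requires a compatible CUDA toolchain), for example:
pip install "transformers>=4.39.2" "flash_attn>=2.5.6"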
💻 Usage Examples
Sentence Transformers
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Alibaba-NLP/gte-Qwen1.5-7B-instruct", trust_remote_code=True)
# Cap the maximum sequence length at 8192 tokens
model.max_seq_length = 8192
queries = [
"how much protein should a female eat",
"summit define",
]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
# Queries are encoded with the predefined "query" prompt; documents need no instruction
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)

# Pairwise similarity scores between queries and documents, scaled by 100
scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
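The printed result is a 2×2 nested list of similarity scores, one row per query and one column per document; higher scores indicate stronger relevance.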
You can check config_sentence_transformers.json for all predefined prompt names. Alternatively, you can pass a custom prompt with model.encode(queries, prompt="Instruct: ...\nQuery: ").
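For example, a custom prompt that reuses the web-search retrieval instruction from the Transformers example below might look like this (the instruction wording is up to your task):

# Same model and queries as in the example above
query_embeddings = model.encode(
    queries,
    prompt="Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: ",
)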
Transformers
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
# Pool by taking the hidden state of the last non-padding token,
# which works for both left-padded and right-padded batches
def last_token_pool(last_hidden_states: Tensor,
                    attention_mask: Tensor) -> Tensor:
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]

# Each query is prefixed with a one-line task instruction; documents are used as-is
def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, 'summit define')
]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen1.5-7B-instruct', trust_remote_code=True)
model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen1.5-7B-instruct', trust_remote_code=True)
max_length = 8192

# Tokenize queries and documents together
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# L2-normalize so that the dot product below is cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)

# Scores between the two queries (rows) and the two documents (columns), scaled by 100
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
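For a 7B-parameter model, CPU inference will be slow; in practice you will usually run the snippet above on a GPU, in half precision, and without gradient tracking. A minimal sketch of the relevant changes (assuming a CUDA device is available; fp16 is an assumption about your accuracy requirements):

model = AutoModel.from_pretrained(
    'Alibaba-NLP/gte-Qwen1.5-7B-instruct',
    trust_remote_code=True,
    torch_dtype=torch.float16,  # assumption: half precision is acceptable
).to('cuda').eval()

# Move the tokenized batch to the same device and disable gradients for inference
batch_dict = {k: v.to('cuda') for k, v in batch_dict.items()}
with torch.no_grad():
    outputs = model(**batch_dict)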
📚 Detailed Documentation
Model Information
| Attribute | Details |
|-----------|---------|
| Model size | 7B |
| Embedding dimension | 4096 |
| Max input tokens | 32k |
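As a quick sanity check, the embedding dimension and sequence-length cap can be read directly off the Sentence Transformers model loaded in the usage example above; a minimal sketch (get_sentence_embedding_dimension and max_seq_length are standard sentence-transformers attributes):

# `model` is the SentenceTransformer instance from the usage example above
print(model.get_sentence_embedding_dimension())  # embedding dimension: 4096
print(model.max_seq_length)                      # 8192 as set above; the model accepts inputs up to 32k tokens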
Evaluation
MTEB & C-MTEB
You can use the scripts/eval_mteb.py script to reproduce the evaluation results of the gte-Qwen1.5-7B-instruct model on MTEB (English) / C-MTEB (Chinese).
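The exact command-line interface of that script is defined in the repository. As a rough, illustrative alternative, a single MTEB task can be run with the open-source mteb package; the task name and output folder below are assumptions for demonstration, not the authors' evaluation setup:

from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-Qwen1.5-7B-instruct", trust_remote_code=True)
# One illustrative classification task; the full MTEB/C-MTEB suites cover many more tasks
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model, output_folder="results")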
📄 License
This project is released under the Apache 2.0 License.
📖 Citation
If you find our paper or models helpful, please consider citing:
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}