# 🚀 multilingual-e5-small-4096

This is a Local-Sparse-Global (LSG) version of intfloat/multilingual-e5-small that can process inputs of up to 4096 tokens.
## 🚀 Quick Start

### ✨ Key Features

- Supports a wide range of languages, including Afrikaans (af), Amharic (am), Arabic (ar), and many others.
- Built on the Local-Sparse-Global (LSG) attention architecture, which allows it to process inputs of up to 4096 tokens.

### 💻 Usage Examples

#### Basic Usage

The example below encodes queries and passages from the MS-MARCO passage ranking dataset:
```python
from sentence_transformers import SentenceTransformer

# trust_remote_code is needed because the LSG attention implementation
# ships as custom code alongside the model weights.
model = SentenceTransformer('efederici/multilingual-e5-small-4096', trust_remote_code=True)

# E5 models expect each input to be prefixed with "query: " or "passage: ".
input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]

embeddings = model.encode(input_texts, normalize_embeddings=True)
```
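Since `normalize_embeddings=True` returns L2-normalized vectors, query-passage relevance can then be scored with a plain dot product. A minimal follow-up sketch:

```python
# encode() returns a NumPy array; for normalized vectors the dot
# product equals cosine similarity.
scores = embeddings[:2] @ embeddings[2:].T
print(scores.tolist())
```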
#### Advanced Usage
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(
    last_hidden_states: Tensor,
    attention_mask: Tensor
) -> Tensor:
    # Zero out padding positions, then mean-pool over the sequence
    # dimension using the true (unpadded) token counts.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]

tokenizer = AutoTokenizer.from_pretrained('efederici/multilingual-e5-small-4096')
model = AutoModel.from_pretrained('efederici/multilingual-e5-small-4096', trust_remote_code=True)

# Tokenize with the extended 4096-token context window.
batch_dict = tokenizer(input_texts, max_length=4096, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# L2-normalize so that dot products equal cosine similarities.
embeddings = F.normalize(embeddings, p=2, dim=1)

# Score each query against each passage (scaled by 100 for readability).
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
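Each query is scored against every passage, so the matching pairs (protein with protein, summit with summit) should receive the highest scores. To exercise the extended context window itself, a long document can be embedded the same way. A minimal sketch reusing `tokenizer`, `model`, and `average_pool` from above; the repeated filler string is only a hypothetical stand-in for a real long document:

```python
# Hypothetical long input; replace the filler with a real document.
long_passage = "passage: " + " ".join(["long document text"] * 700)

batch = tokenizer([long_passage], max_length=4096, padding=True,
                  truncation=True, return_tensors='pt')
outputs = model(**batch)
long_embedding = F.normalize(
    average_pool(outputs.last_hidden_state, batch['attention_mask']),
    p=2, dim=1,
)
print(long_embedding.shape)  # e.g. torch.Size([1, 384])
```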
## 📚 Documentation

### Citation
```bibtex
@article{wang2022text,
  title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2212.03533},
  year={2022}
}
```
### Model Information

| Property | Details |
|----------|---------|
| Model type | Sentence similarity model |
| Supported languages | Afrikaans (af), Amharic (am), Arabic (ar), and many more |
| Context length | Up to 4096 tokens |