🚀 multilingual-e5-small-4096
This is a Local-Sparse-Global (LSG) version of intfloat/multilingual-e5-small that can handle inputs of up to 4,096 tokens.
🚀 Quick Start
✨ Key Features
- Supports a wide range of languages, including Afrikaans (af), Amharic (am), Arabic (ar), and many more.
- Built on the Local-Sparse-Global (LSG) attention architecture, which extends the context window to 4,096 tokens.
💻 Usage Examples
Basic Usage
The example below encodes queries and passages from the MS-MARCO passage ranking dataset. Note that each input text is prefixed with `query: ` or `passage: `, as E5-style models expect:
```python
from sentence_transformers import SentenceTransformer

# trust_remote_code is required because the LSG attention
# implementation is shipped with the model repository.
model = SentenceTransformer('efederici/multilingual-e5-small-4096', trust_remote_code=True)

input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]

embeddings = model.encode(input_texts, normalize_embeddings=True)
```
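Since the embeddings are L2-normalized (`normalize_embeddings=True`), relevance scores can be obtained directly with a dot product. A minimal follow-up sketch, assuming the first two rows are the queries and the last two the passages, mirroring the scoring step of the advanced example below:

```python
# Query-passage similarity: rows are queries, columns are passages.
# Dot product equals cosine similarity because the embeddings are normalized.
scores = embeddings[:2] @ embeddings[2:].T
print(scores.tolist())
```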
Advanced Usage
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(
    last_hidden_states: Tensor,
    attention_mask: Tensor
) -> Tensor:
    # Mean-pool the token embeddings, ignoring padding positions.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]

tokenizer = AutoTokenizer.from_pretrained('efederici/multilingual-e5-small-4096')
model = AutoModel.from_pretrained('efederici/multilingual-e5-small-4096', trust_remote_code=True)

# The LSG model accepts sequences of up to 4,096 tokens.
batch_dict = tokenizer(input_texts, max_length=4096, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    outputs = model(**batch_dict)

embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)

# Similarity between the two queries and the two passages, scaled by 100.
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
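To exercise the extended context window, here is a minimal sketch that reuses `tokenizer`, `model`, and `average_pool` from the block above. The repeated sentence is purely synthetic filler, used only to push the input well past the 512-token limit of the base multilingual-e5-small model:

```python
# Hypothetical long passage: repeating a short sentence to exceed 512 tokens.
long_passage = "passage: " + "The summit is the highest point of a mountain. " * 300

batch = tokenizer([long_passage], max_length=4096, truncation=True, return_tensors='pt')
print(batch['input_ids'].shape)  # sequence length far beyond the usual 512

with torch.no_grad():
    out = model(**batch)

long_embedding = F.normalize(average_pool(out.last_hidden_state, batch['attention_mask']), p=2, dim=1)
```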
📚 Documentation
Citation
```bibtex
@article{wang2022text,
  title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2212.03533},
  year={2022}
}
```
Model Information

| Property | Details |
| --- | --- |
| Model type | Sentence similarity model |
| Supported languages | Afrikaans (af), Amharic (am), Arabic (ar), and many more |
| Input length | Up to 4,096 tokens |