🚀 E5-small-en-ru
The E5-small-en-ru model in this project is a sentence-similarity model. It is a vocabulary-pruned version of intfloat/multilingual-e5-small that keeps only Russian and English tokens, reducing the model size while remaining suitable for tasks restricted to these two languages.
📚 Detailed Documentation

Model Information

This is a vocabulary-pruned version of intfloat/multilingual-e5-small that uses only Russian and English tokens.
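For intuition, vocabulary pruning keeps only the embedding-matrix rows for the tokens of interest and shrinks the model accordingly. The sketch below is a minimal illustration under assumptions, not the script used to produce this model: the two sample sentences stand in for a real bilingual corpus, and rebuilding the tokenizer so that token ids match the new matrix is omitted.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('intfloat/multilingual-e5-small')
tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-small')

# Collect ids of tokens seen in an English/Russian sample (placeholder corpus);
# always keep the special tokens.
corpus = [
    'The first trolleybus was created in Germany.',
    'Первый троллейбус был создан в Германии.',
]
kept_ids = sorted(
    set(tokenizer.all_special_ids)
    | {tid for text in corpus for tid in tokenizer(text)['input_ids']}
)

# Slice the input embedding matrix down to the kept rows.
old_embeddings = model.get_input_embeddings().weight.data
new_embeddings = torch.nn.Embedding(len(kept_ids), old_embeddings.size(1))
new_embeddings.weight.data.copy_(old_embeddings[kept_ids])
model.set_input_embeddings(new_embeddings)
model.config.vocab_size = len(kept_ids)
```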
Model Size

|   | intfloat/multilingual-e5-small | d0rj/e5-small-en-ru |
| --- | --- | --- |
| Model size (MB) | 448.81 | 170.88 |
| Parameters | 117,653,760 | 44,795,520 |
| Word embedding parameters | 96,014,208 | 23,155,968 |
Performance

Performance on the SberQuAD dev benchmark.

| SberQuAD metric (4122 questions) | intfloat/multilingual-e5-small | d0rj/e5-small-en-ru |
| --- | --- | --- |
| recall@3 |  |  |
| map@3 |  |  |
| mrr@3 |  |  |
| recall@5 |  |  |
| map@5 |  |  |
| mrr@5 |  |  |
| recall@10 |  |  |
| map@10 |  |  |
| mrr@10 |  |  |
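For reference, these metrics can be computed as follows. This is a generic sketch, not the authors' evaluation code, and it assumes one relevant passage per question:

```python
def recall_at_k(ranked_ids, relevant_id, k):
    # 1 if the relevant passage appears in the top-k results, else 0.
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def mrr_at_k(ranked_ids, relevant_id, k):
    # Reciprocal of the rank at which the relevant passage first appears.
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0

# Each metric is averaged over all 4122 questions. With a single relevant
# passage per query, map@k coincides with mrr@k.
```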
💻 Usage Examples

Basic Usage

- Retrieval distance: use the dot product for retrieval (see the sketch after this list).
- Asymmetric tasks: for asymmetric tasks such as passage retrieval in open QA or ad-hoc information retrieval, use the "query: " and "passage: " prefixes respectively.
- Symmetric tasks: for symmetric tasks such as semantic similarity, bitext mining, or paraphrase retrieval, use the "query: " prefix.
- Embeddings as features: if you want to use the embeddings as features, e.g. for linear-probing classification or clustering, use the "query: " prefix.
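A minimal sketch of the prefix conventions above, using the sentence-transformers loading shown later in this card (the example texts are illustrative):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('d0rj/e5-small-en-ru')

# Asymmetric retrieval: "query: " for the question, "passage: " for documents.
query = model.encode(['query: Где был создан первый троллейбус?'],
                     normalize_embeddings=True, convert_to_tensor=True)
passages = model.encode(['passage: The first trolleybus was created in Germany.',
                         'passage: Корпоративный сайт содержит информацию о компании.'],
                        normalize_embeddings=True, convert_to_tensor=True)

# Dot product over normalized embeddings equals cosine similarity;
# rank passages by score.
scores = query @ passages.T
print(scores)
```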
Advanced Usage

Using the transformers library

Direct usage
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import XLMRobertaTokenizer, XLMRobertaModel


def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    # Mean-pool token embeddings, ignoring padding positions.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text needs a "query: " or "passage: " prefix (see above).
input_texts = [
    'query: How does a corporate website differ from a business card website?',
    'query: Где был создан первый троллейбус?',
    'passage: The first trolleybus was created in Germany by engineer Werner von Siemens, probably influenced by the idea of his brother, Dr. Wilhelm Siemens, who lived in England, expressed on May 18, 1881 at the twenty-second meeting of the Royal Scientific Society. The electrical circuit was carried out by an eight-wheeled cart (Kontaktwagen) rolling along two parallel contact wires. The wires were located quite close to each other, and in strong winds they often overlapped, which led to short circuits. An experimental trolleybus line with a length of 540 m (591 yards), opened by Siemens & Halske in the Berlin suburb of Halensee, operated from April 29 to June 13, 1882.',
    'passage: Корпоративный сайт — содержит полную информацию о компании-владельце, услугах/продукции, событиях в жизни компании. Отличается от сайта-визитки и представительского сайта полнотой представленной информации, зачастую содержит различные функциональные инструменты для работы с контентом (поиск и фильтры, календари событий, фотогалереи, корпоративные блоги, форумы). Может быть интегрирован с внутренними информационными системами компании-владельца (КИС, CRM, бухгалтерскими системами). Может содержать закрытые разделы для тех или иных групп пользователей — сотрудников, дилеров, контрагентов и пр.',
]

tokenizer = XLMRobertaTokenizer.from_pretrained('d0rj/e5-small-en-ru', use_cache=False)
model = XLMRobertaModel.from_pretrained('d0rj/e5-small-en-ru', use_cache=False)

# Tokenize, encode, pool, and L2-normalize.
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)

# Query-passage similarity matrix (queries are the first two texts).
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
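Since the embeddings are L2-normalized, these scores are cosine similarities scaled by 100, and each query should score highest against its matching passage (the diagonal of the resulting 2×2 matrix).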
Using a pipeline
```python
from transformers import pipeline

# Reuses input_texts from the example above.
pipe = pipeline('feature-extraction', model='d0rj/e5-small-en-ru')
embeddings = pipe(input_texts, return_tensors=True)
embeddings[0].size()  # torch.Size([1, seq_len, 384])
```
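Note that the feature-extraction pipeline returns per-token hidden states rather than sentence vectors. A minimal sketch of turning them into a sentence embedding follows; plain mean pooling matches average_pool above because the pipeline processes each text individually, so there is no padding to mask:

```python
import torch.nn.functional as F

# Pool over the token dimension and L2-normalize, as in the direct-usage example.
sentence_embedding = F.normalize(embeddings[0].mean(dim=1), p=2, dim=1)
print(sentence_embedding.size())  # torch.Size([1, 384])
```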
Using the sentence-transformers library
```python
from sentence_transformers import SentenceTransformer

sentences = [
    'query: Что такое круглые тензоры?',
    'passage: Abstract: we introduce a novel method for compressing round tensors based on their inherent radial symmetry. We start by generalising PCA and eigen decomposition on round tensors...',
]

model = SentenceTransformer('d0rj/e5-small-en-ru')
embeddings = model.encode(sentences, convert_to_tensor=True)
embeddings.size()  # torch.Size([2, 384])
```
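As a follow-up, you can score the query against the passage directly; util.cos_sim is a standard sentence-transformers helper:

```python
from sentence_transformers import util

# Cosine similarity between the query and the passage embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```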
📄 License

This project is licensed under the MIT License.