🚀 Character-Level Embedding Based Fuzzy Matching Model
This project provides a Siamese BERT architecture trained on character-level tokens for embedding-based fuzzy matching. It can be applied to fuzzy matching, fuzzy search, entity resolution, record linkage, and search over structured data.
🚀 Quick Start
📦 Installation
Using this model is straightforward once you have sentence-transformers installed:
```bash
pip install -U sentence-transformers
```
💻 Usage Examples
Basic Usage (Sentence-Transformers)
```python
from sentence_transformers import SentenceTransformer, util

# Split each word into space-separated characters for character-level tokenization
word1 = " ".join("fuzzformer")
word2 = " ".join("fizzformer")
words = [word1, word2]

model = SentenceTransformer('shahrukhx01/paraphrase-mpnet-base-v2-fuzzy-matcher')
fuzzy_embeddings = model.encode(words)

print("Fuzzy Match score:")
print(util.cos_sim(fuzzy_embeddings[0], fuzzy_embeddings[1]))
```
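The same embeddings can also drive fuzzy search against a pool of candidate strings, one of the use cases mentioned above. The following is a minimal sketch; the candidate list is purely illustrative, and `util.semantic_search` is the standard sentence-transformers helper for ranking a corpus by cosine similarity:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('shahrukhx01/paraphrase-mpnet-base-v2-fuzzy-matcher')

# Hypothetical candidate pool; every string is character-split, as the model expects
candidates = ["fuzzformer", "transformer", "former", "fizzbuzz"]
candidate_embeddings = model.encode([" ".join(c) for c in candidates])

# Embed the (character-split) query and rank candidates by cosine similarity
query_embedding = model.encode(" ".join("fuzformer"))
for hit in util.semantic_search(query_embedding, candidate_embeddings, top_k=3)[0]:
    print(candidates[hit['corpus_id']], hit['score'])
```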
Advanced Usage (HuggingFace Transformers)
```python
import torch
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def cos_sim(a: Tensor, b: Tensor):
    """
    Borrowed from the sentence-transformers repo.
    Computes the cosine similarity cos_sim(a[i], b[j]) for all i and j.
    :return: Matrix with res[i][j] = cos_sim(a[i], b[j])
    """
    if not isinstance(a, torch.Tensor):
        a = torch.tensor(a)
    if not isinstance(b, torch.Tensor):
        b = torch.tensor(b)
    if len(a.shape) == 1:
        a = a.unsqueeze(0)
    if len(b.shape) == 1:
        b = b.unsqueeze(0)
    a_norm = torch.nn.functional.normalize(a, p=2, dim=1)
    b_norm = torch.nn.functional.normalize(b, p=2, dim=1)
    return torch.mm(a_norm, b_norm.transpose(0, 1))


def mean_pooling(model_output, attention_mask):
    # Average token embeddings, ignoring padding via the attention mask
    token_embeddings = model_output[0]  # first element holds all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Split each word into space-separated characters for character-level tokenization
word1 = " ".join("fuzzformer")
word2 = " ".join("fizzformer")
words = [word1, word2]

tokenizer = AutoTokenizer.from_pretrained('shahrukhx01/paraphrase-mpnet-base-v2-fuzzy-matcher')
model = AutoModel.from_pretrained('shahrukhx01/paraphrase-mpnet-base-v2-fuzzy-matcher')

encoded_input = tokenizer(words, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)

fuzzy_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Fuzzy Match score:")
print(cos_sim(fuzzy_embeddings[0], fuzzy_embeddings[1]))
```
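Because cos_sim returns the full pairwise similarity matrix (res[i][j] = cos_sim(a[i], b[j])), the same building blocks extend naturally to record linkage between two lists of strings. The sketch below reuses tokenizer, model, mean_pooling, and cos_sim from the snippet above; the embed helper and the record names are purely illustrative:

```python
# Hypothetical record names from two sources to be linked
left = ["jon smith", "alice wonders"]
right = ["john smyth", "alyce wonders", "bob stone"]

def embed(strings):
    # Character-split, tokenize, and mean-pool, reusing the helpers defined above
    chars = [" ".join(s) for s in strings]
    enc = tokenizer(chars, padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        out = model(**enc)
    return mean_pooling(out, enc['attention_mask'])

# scores[i][j] = cosine similarity between left[i] and right[j]
scores = cos_sim(embed(left), embed(right))
for i, name in enumerate(left):
    j = scores[i].argmax().item()
    print(name, "->", right[j], float(scores[i, j]))
```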
🔗 Acknowledgements
A big thank you to Sentence Transformers, whose implementation greatly accelerated the development of Fuzzformer.
📖 Citation
If you use FuzzTransformer in your work, please cite it with the following BibTeX entry:
```bibtex
@misc{shahrukhkhan2021fuzzTransformer,
  author    = {Shahrukh Khan},
  title     = {FuzzTransformer: A character level embedding based Siamese transformer for fuzzy string matching.},
  year      = 2021,
  publisher = {Coming soon},
  doi       = {Coming soon},
  url       = {Coming soon}
}
```