🚀 rawsh/multi-qa-MiniLM-BERT-Tiny-distill-L-2_H-128_A-cos-v1
This is a sentence-transformers model: it maps sentences and paragraphs to a 128-dimensional dense vector space and can be used for tasks such as clustering or semantic search. It is based on nreimers/BERT-Tiny_L-2_H-128_A-2 and was distilled from the teacher model multi-qa-MiniLM-L6-cos-v1. Its performance is modest, but the model is only 5MB in size.
Model Information

| Property | Details |
|----------|---------|
| Model type | sentence-transformers |
| Teacher model | multi-qa-MiniLM-L6-cos-v1 |
| Base model | nreimers/BERT-Tiny_L-2_H-128_A-2 |
| Model size | 5MB |
Evaluation Results
2023-06-05 15:28:46 - EmbeddingSimilarityEvaluator: Evaluating the model on sts-dev dataset after epoch 0:
2023-06-05 15:28:47 - Cosine-Similarity : Pearson: 0.7336 Spearman: 0.7582
2023-06-05 15:28:47 - Manhattan-Distance: Pearson: 0.7960 Spearman: 0.7976
2023-06-05 15:28:47 - Euclidean-Distance: Pearson: 0.7968 Spearman: 0.7984
2023-06-05 15:28:47 - Dot-Product-Similarity: Pearson: 0.5599 Spearman: 0.5410
2023-06-05 15:28:48 - MSE evaluation (lower = better) on dataset after epoch 0:
2023-06-05 15:28:48 - MSE (*100): 0.152902
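The numbers above come from sentence-transformers evaluators: EmbeddingSimilarityEvaluator reports Pearson/Spearman correlations on sts-dev, and the MSE evaluator measures how closely the student reproduces the teacher's embeddings. A minimal sketch of how such an STS-style evaluation can be run; the sentence pairs and gold scores below are illustrative placeholders, not the actual sts-dev data:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer('rawsh/multi-qa-MiniLM-BERT-Tiny-distill-L-2_H-128_A-cos-v1')

# Illustrative sentence pairs with gold similarity scores scaled to [0, 1]
sentences1 = ["A man is playing a guitar", "A cat sits on the mat"]
sentences2 = ["Someone is playing music", "A dog runs in the park"]
gold_scores = [0.8, 0.1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name='sts-dev')
# Logs Pearson/Spearman for cosine, Manhattan, Euclidean and dot-product
# similarities, and returns the main score
print(evaluator(model))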
For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net
🚀 Quick Start
Installation
Using this model requires sentence-transformers to be installed:
pip install -U sentence-transformers
💻 Usage Examples
Basic Usage (Sentence-Transformers)
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

# Load the model and encode the sentences into 128-dimensional embeddings
model = SentenceTransformer('rawsh/multi-qa-MiniLM-BERT-Tiny-distill-L-2_H-128_A-cos-v1')
embeddings = model.encode(sentences)
print(embeddings)
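Because the evaluation above shows cosine similarity working much better than the raw dot product for this model, embeddings should be compared with cosine similarity. A minimal semantic-search sketch; the corpus and query strings are illustrative:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('rawsh/multi-qa-MiniLM-BERT-Tiny-distill-L-2_H-128_A-cos-v1')

# Illustrative corpus and query
corpus = ["Python is a programming language", "Berlin is the capital of Germany"]
query = "What is the capital of Germany?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus entries by cosine similarity to the query
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))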
Advanced Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model as follows: first pass the input through the transformer model, then apply the correct pooling operation on top of the contextualized word embeddings.
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling: average the token embeddings, using the attention mask
# so that padding tokens do not contribute
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element holds all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['This is an example sentence', 'Each sentence is converted']

# Load the model from the HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rawsh/multi-qa-MiniLM-BERT-Tiny-distill-L-2_H-128_A-cos-v1')
model = AutoModel.from_pretrained('rawsh/multi-qa-MiniLM-BERT-Tiny-distill-L-2_H-128_A-cos-v1')

# Tokenize the sentences and compute token embeddings
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)

# Apply mean pooling to obtain sentence embeddings
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
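If you intend to score these embeddings with a plain dot product, it may help to L2-normalize them first so the dot product equals cosine similarity; this normalization step is a suggestion, not something the original card performs:

import torch.nn.functional as F

# L2-normalize so that dot products between embeddings equal cosine similarities
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)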
🔧 Technical Details
Training Parameters
The model was trained with the following parameters:

DataLoader:
torch.utils.data.dataloader.DataLoader of length 141164, with parameters:
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
Loss function:
sentence_transformers.losses.MSELoss.MSELoss

Parameters of the fit() method:
{
"epochs": 1,
"evaluation_steps": 5000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
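Put together, a distillation run with these parameters would look roughly like the sketch below. This is a reconstruction from the values above, not the author's script: the training-data file parallel-sentences.tsv.gz is a hypothetical placeholder, and the PCA step that shrinks the teacher's 384-dimensional output to the student's 128 dimensions is borrowed from the standard sentence-transformers distillation example; whether this model used it is an assumption.

import gzip
import torch
from torch.utils.data import DataLoader
from sklearn.decomposition import PCA
from sentence_transformers import SentenceTransformer, models, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# Teacher produces the target embeddings the student learns to imitate
teacher = SentenceTransformer('multi-qa-MiniLM-L6-cos-v1')

# Student: BERT-Tiny encoder + mean pooling, matching the architecture below
word_embedding_model = models.Transformer('nreimers/BERT-Tiny_L-2_H-128_A-2')
pooling = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
                         pooling_mode_mean_tokens=True)
student = SentenceTransformer(modules=[word_embedding_model, pooling])

# The teacher emits 384-dim embeddings but the student emits 128-dim ones, so
# MSELoss needs matching sizes. The sentence-transformers distillation example
# appends a PCA-initialized Dense layer to the teacher (assumption, see above).
with gzip.open('parallel-sentences.tsv.gz', 'rt') as f:  # hypothetical file
    pca_sentences = [line.split('\t')[0] for line in f][:20000]
pca = PCA(n_components=student.get_sentence_embedding_dimension())
pca.fit(teacher.encode(pca_sentences, convert_to_numpy=True))
dense = models.Dense(in_features=teacher.get_sentence_embedding_dimension(),
                     out_features=student.get_sentence_embedding_dimension(),
                     bias=False, activation_function=torch.nn.Identity())
dense.linear.weight = torch.nn.Parameter(torch.tensor(pca.components_, dtype=torch.float32))
teacher.add_module('dense', dense)

# Pair each training sentence with its teacher embedding
train_data = ParallelSentencesDataset(student_model=student, teacher_model=teacher)
train_data.load_data('parallel-sentences.tsv.gz')
train_dataloader = DataLoader(train_data, shuffle=True, batch_size=64)

# MSE between student and teacher embeddings, as listed above
train_loss = losses.MSELoss(model=student)

student.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=1000,
    evaluation_steps=5000,
    optimizer_params={'lr': 1e-4, 'eps': 1e-6},
    weight_decay=0.01,
    max_grad_norm=1,
)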
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 128, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
Citation and Authors
For more information, please refer to the related documentation.