🚀 BERT/MPNet Base Model (uncased)
This is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
🚀 Quick Start
📦 Installation
Using this model is straightforward once sentence-transformers is installed:
pip install -U sentence-transformers
💻 Usage Examples
Basic Usage
With the sentence-transformers library, the model can be used as follows:
from sentence_transformers import SentenceTransformer, util

# Load the model from the Hugging Face Hub
model = SentenceTransformer('abbasgolestani/ag-nli-bert-mpnet-base-uncased-sentence-similarity-v1')
# Two lists of sentences to score pairwise
sentences1 = ['I am honored to be given the opportunity to help make our company better',
'I love my job and what I do here',
'I am excited about our company’s vision']
sentences2 = ['I am hopeful about the future of our company',
'My work is aligning with my passion',
'Definitely our company vision will be the next breakthrough to change the world and I’m so happy and proud to work here']
# Compute an embedding for each sentence
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)

# Compute the cosine similarity for every pair
cosine_scores = util.cos_sim(embeddings1, embeddings2)
# Print each pair with its similarity score
for i in range(len(sentences1)):
    print("{} \t\t {} \t\t Score: {:.4f}".format(sentences1[i], sentences2[i], cosine_scores[i][i]))
Advanced Usage
Without sentence-transformers, you can use the model as follows: first pass your input through the transformer model, then apply the correct pooling operation on top of the contextualized word embeddings.
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling: average the token embeddings, weighted by the attention mask
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element holds all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load the model and tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained('abbasgolestani/ag-nli-bert-mpnet-base-uncased-sentence-similarity-v1')
model = AutoModel.from_pretrained('abbasgolestani/ag-nli-bert-mpnet-base-uncased-sentence-similarity-v1')

# Tokenize the sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Apply mean pooling to get one fixed-size vector per sentence
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
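To turn these raw embeddings into cosine similarity scores without sentence-transformers, you can L2-normalize them so that a dot product equals cosine similarity. A minimal sketch continuing from the code above:

import torch.nn.functional as F

# L2-normalize so that the dot product equals cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T
print(cosine_scores)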
📚 Documentation
🔍 Evaluation Results
The model was evaluated on a local dataset of 1,000 sentence pairs, on which it achieved an accuracy of 82%.
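The evaluation protocol is not specified in the documentation. One common way to score a model on labeled sentence pairs is to threshold the cosine similarity; the sketch below assumes this setup, with placeholder pairs, labels, and a 0.5 threshold that are not from the model card:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('abbasgolestani/ag-nli-bert-mpnet-base-uncased-sentence-similarity-v1')

# Placeholder labeled pairs: 1 = similar, 0 = dissimilar
pairs = [('I love my job', 'My work is my passion'),
         ('I love my job', 'The weather is nice today')]
labels = [1, 0]

emb1 = model.encode([a for a, _ in pairs], convert_to_tensor=True)
emb2 = model.encode([b for _, b in pairs], convert_to_tensor=True)
scores = util.cos_sim(emb1, emb2).diagonal()

# Assumed decision threshold of 0.5
preds = (scores > 0.5).long().tolist()
accuracy = sum(int(p == l) for p, l in zip(preds, labels)) / len(labels)
print("Accuracy: {:.2%}".format(accuracy))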
🔧 Technical Details
Training Parameters
The model was trained with the following parameters:
- DataLoader: torch.utils.data.dataloader.DataLoader of length 7, with parameters: {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
- Loss function: sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss

Parameters of the fit() method:
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
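Taken together, these settings correspond to a sentence-transformers v2-style fit() call roughly like the sketch below; the starting checkpoint and training pairs are placeholders, since the actual training data is not published:

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('nli-mpnet-base-v2')  # assumed starting checkpoint

# Placeholder training pairs with similarity labels in [0, 1]
train_examples = [
    InputExample(texts=['I love my job', 'My work is my passion'], label=0.9),
    InputExample(texts=['I love my job', 'The weather is nice'], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)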
Full Model Architecture
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
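The max_seq_length of 75 means longer inputs are truncated, and pooling_mode_mean_tokens confirms that mean pooling produces the final 768-dimensional sentence vector. Both settings can be checked after loading the model:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer('abbasgolestani/ag-nli-bert-mpnet-base-uncased-sentence-similarity-v1')
print(model.max_seq_length)                      # 75: longer inputs are truncated
print(model.get_sentence_embedding_dimension())  # 768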
📄 License
No license information is provided in the documentation.
📖 Citation and Authors
No further information is provided in the documentation.