🚀 ko-sbert-sts
This is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
🚀 Quick Start
The model can be used in two ways, both described below.
📦 Installation
Using this model requires sentence-transformers, which you can install with:
pip install -U sentence-transformers
💻 Usage Examples
Basic Usage (Sentence-Transformers)
Once installed, you can use the model as follows:
from sentence_transformers import SentenceTransformer
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]
model = SentenceTransformer('jhgan/ko-sbert-sts')
embeddings = model.encode(sentences)
print(embeddings)
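Since the embeddings live in a shared 768-dimensional space, they can be compared directly, for example for the semantic search mentioned above. A minimal sketch, assuming a reasonably recent sentence-transformers release that provides util.cos_sim; the corpus and query sentences are made-up examples:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jhgan/ko-sbert-sts')

# Hypothetical corpus and query for illustration
corpus = ["한국어 문장 임베딩을 위한 버트 모델입니다.", "오늘 날씨가 정말 좋네요."]
query = "문장 임베딩 모델"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and each corpus sentence
scores = util.cos_sim(query_embedding, corpus_embeddings)
print(scores)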
Advanced Usage (HuggingFace Transformers)
You can also use the model without sentence-transformers installed: pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For example:
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling: average the token embeddings, weighting by the attention
# mask so that padding tokens are ignored
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element holds all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['This is an example sentence', 'Each sentence is converted']

tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sbert-sts')
model = AutoModel.from_pretrained('jhgan/ko-sbert-sts')

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
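Because this model's Pooling module uses mean pooling (see the architecture below), the two usage paths should agree. A small sanity check continuing the snippet above, reusing its sentences and sentence_embeddings; the tolerance is an assumption on my part, not a documented guarantee:

from sentence_transformers import SentenceTransformer

st_model = SentenceTransformer('jhgan/ko-sbert-sts')
st_embeddings = st_model.encode(sentences, convert_to_tensor=True)

# Both paths compute BERT token embeddings followed by mean pooling,
# so the results should match up to floating-point noise
print(torch.allclose(st_embeddings, sentence_embeddings, atol=1e-4))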
📚 Detailed Documentation
Evaluation Results
The model was trained on the KorSTS training set; below are its results on the KorSTS evaluation set:
| Metric               | Value |
|----------------------|-------|
| Cosine Pearson       | 81.55 |
| Cosine Spearman      | 81.23 |
| Euclidean Pearson    | 79.94 |
| Euclidean Spearman   | 79.79 |
| Manhattan Pearson    | 79.90 |
| Manhattan Spearman   | 79.75 |
| Dot-Product Pearson  | 76.02 |
| Dot-Product Spearman | 75.31 |
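These figures are Pearson/Spearman correlations (scaled to 0-100) between the model's similarity scores and human judgments. A hedged sketch of how such numbers are typically produced with the library's EmbeddingSimilarityEvaluator, named in the fit() parameters below; the two example pairs and their gold scores are invented for illustration, not taken from KorSTS:

from sentence_transformers import SentenceTransformer, InputExample
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer('jhgan/ko-sbert-sts')

# KorSTS-style pairs carry a gold similarity score in [0, 5];
# the evaluator expects labels rescaled to [0, 1]
examples = [
    InputExample(texts=["한 남자가 기타를 치고 있다.", "남자가 악기를 연주하고 있다."], label=4.2 / 5.0),
    InputExample(texts=["아이가 공원에서 뛰고 있다.", "주식 시장이 하락했다."], label=0.0 / 5.0),
]

evaluator = EmbeddingSimilarityEvaluator.from_input_examples(examples, name='sts-sketch')
print(evaluator(model))  # main similarity score (a float or a dict of metrics, depending on the library version)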
Training Parameters
The model was trained with the following parameters:
DataLoader: torch.utils.data.dataloader.DataLoader of length 719 with parameters:
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
Loss: sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss
Parameters of the fit() method:
{
    "epochs": 5,
    "evaluation_steps": 1000,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 360,
    "weight_decay": 0.01
}
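Put together, the settings above correspond roughly to the following training call. A hedged reconstruction, assuming the standard sentence-transformers fit() API; the InputExample pair is a placeholder for the real KorSTS training set (719 batches of size 8):

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('jhgan/ko-sbert-sts')  # placeholder; the original run started from a base Korean BERT

# Placeholder data; the original run used the KorSTS training set,
# with labels being similarity scores rescaled to [0, 1]
train_examples = [
    InputExample(texts=["문장 하나", "문장 둘"], label=0.8),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=5,
    scheduler='WarmupLinear',
    warmup_steps=360,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)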
Full Model Architecture
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
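The architecture can also be inspected programmatically. A small sketch using standard sentence-transformers accessors:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer('jhgan/ko-sbert-sts')
print(model)                                     # prints the module list shown above
print(model.get_sentence_embedding_dimension())  # 768
print(model.max_seq_length)                      # 128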
Citing & Authors
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding. arXiv preprint arXiv:2004.03289.
- Reimers, Nils and Iryna Gurevych. "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks." ArXiv abs/1908.10084 (2019).
- Reimers, Nils and Iryna Gurevych. "Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation." EMNLP (2020).