bespin-global/klue-sroberta-base-continue-learning-by-mnr
This model utilizes the KLUE/NLI and KLUE/STS datasets. It was trained with the continue-learning approach described in the official sentence-transformers documentation, as follows:
- After negative sampling on the NLI dataset, run a first round of NLI training with MultipleNegativesRankingLoss.
- Starting from the model produced in step 1, run a second round of STS training on the STS dataset with CosineSimilarityLoss.
For further details about the training, please refer to the Blog and the Colab practice code.
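The recipe above can be sketched with the sentence-transformers training API. The starting checkpoint, example data, batch sizes, and epoch counts below are illustrative assumptions rather than the original training script; the exact fit() parameters of the second stage are listed under Training further down.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed starting checkpoint for the continue-learning run.
model = SentenceTransformer("klue/roberta-base")

# Stage 1: KLUE/NLI triplets built after negative sampling
# (anchor = premise, positive = entailment, hard negative = contradiction).
nli_examples = [
    InputExample(texts=["premise ...", "entailed hypothesis ...", "contradicted hypothesis ..."]),
    # ... more triplets built from KLUE/NLI
]
nli_loader = DataLoader(nli_examples, shuffle=True, batch_size=32)
nli_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(nli_loader, nli_loss)], epochs=1, warmup_steps=100)

# Stage 2: continue training on KLUE/STS pairs with CosineSimilarityLoss,
# where each pair carries a similarity label scaled to [0, 1].
sts_examples = [
    InputExample(texts=["sentence A", "sentence B"], label=0.8),
    # ... more scored pairs from KLUE/STS
]
sts_loader = DataLoader(sts_examples, shuffle=True, batch_size=32)
sts_loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(sts_loader, sts_loss)], epochs=4, warmup_steps=132)
```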
This is a sentence-transformers model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
Quick Start
Features
- Utilizes the KLUE/NLI and KLUE/STS datasets.
- Trained through a two-step process with different loss functions.
- Maps text to a 768-dimensional dense vector space for various NLP tasks.
Installation
Using this model becomes easy when you have sentence-transformers installed:
```bash
pip install -U sentence-transformers
```
Usage Examples
Basic Usage
```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer("bespin-global/klue-sroberta-base-continue-learning-by-mnr")
embeddings = model.encode(sentences)
print(embeddings)
```
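Because the embeddings share one vector space, a common next step is to score similarity between them, for example for semantic search. A small illustrative example using the cos_sim utility (the Korean sentences are placeholders):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("bespin-global/klue-sroberta-base-continue-learning-by-mnr")

query = "날씨가 화창하다"                           # "The weather is sunny"
corpus = ["오늘은 하늘이 맑다", "주가가 하락했다"]   # "The sky is clear today", "Stock prices fell"

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Cosine similarity between the query and each corpus sentence
print(util.cos_sim(query_emb, corpus_emb))
```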
Advanced Usage
Without sentence-transformers, you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling: average token embeddings, using the attention mask so padding tokens are ignored.
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['This is an example sentence', 'Each sentence is converted']

# Load the model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("bespin-global/klue-sroberta-base-continue-learning-by-mnr")
model = AutoModel.from_pretrained("bespin-global/klue-sroberta-base-continue-learning-by-mnr")

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Apply mean pooling to get sentence embeddings
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
Documentation
Evaluation Results
EmbeddingSimilarityEvaluator: evaluating the model on the sts-test dataset:
- Cosine-Similarity:
  - Pearson: 0.8901, Spearman: 0.8893
- Manhattan-Distance:
  - Pearson: 0.8867, Spearman: 0.8818
- Euclidean-Distance:
  - Pearson: 0.8875, Spearman: 0.8827
- Dot-Product-Similarity:
  - Pearson: 0.8786, Spearman: 0.8735
- Average: 0.8892573547643868
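A minimal sketch of how such scores can be produced with EmbeddingSimilarityEvaluator. The two pairs below are placeholders; the reported numbers come from the KLUE STS evaluation data.

```python
from sentence_transformers import SentenceTransformer, InputExample
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("bespin-global/klue-sroberta-base-continue-learning-by-mnr")

# Placeholder gold pairs; labels are similarity scores scaled to [0, 1].
examples = [
    InputExample(texts=["문장 1", "문장 2"], label=0.9),
    InputExample(texts=["문장 3", "문장 4"], label=0.1),
]
evaluator = EmbeddingSimilarityEvaluator.from_input_examples(examples, name="sts-test")
print(evaluator(model))  # main correlation score; per-metric values are also logged
```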
Training
The model was trained with the parameters:
DataLoader:
torch.utils.data.dataloader.DataLoader of length 329 with parameters:
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
Loss:
sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss
Parameters of the fit() method:
```json
{
    "epochs": 4,
    "evaluation_steps": 32,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 132,
    "weight_decay": 0.01
}
```
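Mapped onto a fit() call, this would look roughly as below. The starting checkpoint and the placeholder STS pair are assumptions; the snippet mirrors the listed parameters rather than reproducing the original script (the optimizer defaults to AdamW).

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("klue/roberta-base")  # assumed: the model after the NLI stage

# Placeholder KLUE/STS pairs; the real run used a DataLoader of length 329.
sts_examples = [InputExample(texts=["문장 A", "문장 B"], label=0.8)]
sts_loader = DataLoader(sts_examples, shuffle=True, batch_size=32)

model.fit(
    train_objectives=[(sts_loader, losses.CosineSimilarityLoss(model))],
    evaluator=EmbeddingSimilarityEvaluator.from_input_examples(sts_examples, name="sts-test"),
    epochs=4,
    evaluation_steps=32,
    warmup_steps=132,
    max_grad_norm=1,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    scheduler="WarmupLinear",
)
```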
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
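The same stack can be assembled explicitly from sentence-transformers modules; a sketch assuming klue/roberta-base as the underlying checkpoint:

```python
from sentence_transformers import SentenceTransformer, models

word_embedding_model = models.Transformer(
    "klue/roberta-base", max_seq_length=512, do_lower_case=True
)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```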
License
This model is licensed under the CC BY 4.0 license.
Citing & Authors
JaeHyeong AN at Bespin Global
Information Table

| Property | Details |
|----------|---------|
| Pipeline Tag | sentence-similarity |
| Tags | sentence-transformers, feature-extraction, sentence-similarity, transformers |
| Datasets | klue |
| Language | ko |
| License | cc-by-4.0 |