# kamalkraj/BioSimCSE-BioLinkBERT-BASE
This is a `sentence-transformers` model that maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks such as clustering or semantic search.
## 🚀 Quick Start
This section provides two ways to use the model: with `sentence-transformers` and with Hugging Face `transformers`.
## ✨ Features
- Maps sentences and paragraphs to a 768-dimensional dense vector space.
- Can be used for clustering or semantic search tasks.
## 📦 Installation
To use this model, install `sentence-transformers`:

```bash
pip install -U sentence-transformers
```
## 💻 Usage Examples

### Basic Usage
If you have `sentence-transformers` installed, you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('kamalkraj/BioSimCSE-BioLinkBERT-BASE')
embeddings = model.encode(sentences)
print(embeddings)
```
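Since the model targets semantic search, a natural next step is scoring sentence pairs by cosine similarity. The following is a minimal sketch using the library's `util.cos_sim` helper, with the sentences borrowed from the widget example at the end of this card:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('kamalkraj/BioSimCSE-BioLinkBERT-BASE')

# Encode a query and a small corpus, then rank the corpus by cosine similarity.
query_embedding = model.encode("The up-regulation of miR-146a was also detected in cervical cancer tissues.")
corpus = [
    "The expression of miR-146a has been found to be up-regulated in cervical cancer.",
    "Only concomitant ablation of ERK1 and ERK2 impairs tumor growth.",
]
corpus_embeddings = model.encode(corpus)

scores = util.cos_sim(query_embedding, corpus_embeddings)  # shape: (1, len(corpus))
for sentence, score in zip(corpus, scores[0]):
    print(f"{score.item():.4f}  {sentence}")
```

The first corpus sentence, a paraphrase of the query, should receive the higher score.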
### Advanced Usage
Without `sentence-transformers`, you can use the model as follows: first pass your input through the transformer model, then apply the appropriate pooling operation on top of the contextualized word embeddings.
```python
import torch
from transformers import AutoTokenizer, AutoModel


def mean_pooling(model_output, attention_mask):
    # Mean-pool the token embeddings, using the attention mask to ignore padding.
    token_embeddings = model_output[0]  # first element holds all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


sentences = ['This is an example sentence', 'Each sentence is converted']

tokenizer = AutoTokenizer.from_pretrained('kamalkraj/BioSimCSE-BioLinkBERT-BASE')
model = AutoModel.from_pretrained('kamalkraj/BioSimCSE-BioLinkBERT-BASE')

# Tokenize the sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform mean pooling to get one fixed-size vector per sentence
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
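To get similarity scores from this transformers-only path, cosine similarity can be computed directly on the pooled embeddings. Continuing from the snippet above, with plain PyTorch:

```python
import torch.nn.functional as F

# L2-normalize the embeddings; a matrix product then gives pairwise cosine similarities.
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T
print(cosine_scores)
```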
## 📚 Documentation

### Evaluation Results
For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net
### Training
The model was trained with the following parameters:

**DataLoader:**

`torch.utils.data.dataloader.DataLoader` of length 7708 with parameters:

```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss:**

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:

```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
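MultipleNegativesRankingLoss uses in-batch negatives: for each anchor, its paired sentence must outscore the other 127 positives in the 128-example batch under scaled cosine similarity. A minimal PyTorch sketch of the objective (not the library's exact implementation):

```python
import torch
import torch.nn.functional as F

def multiple_negatives_ranking_loss(anchors, positives, scale=20.0):
    # anchors, positives: (batch, dim) embeddings where positives[i] pairs with anchors[i].
    # Entry (i, j) is the scaled cosine similarity between anchor i and positive j.
    scores = scale * F.normalize(anchors, dim=1) @ F.normalize(positives, dim=1).T
    # The correct match for anchor i is positive i; all other columns act as negatives.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```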
**Parameters of the fit() method:**

```json
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 5e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 771,
    "weight_decay": 0.01
}
```
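A comparable run could be set up with the sentence-transformers `fit()` API as sketched below. The training pairs themselves are not part of this card, so `train_examples` is a placeholder; the hyperparameters mirror the values listed above:

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# In the original work training started from a BioLinkBERT encoder; any base
# checkpoint compatible with SentenceTransformer works for this skeleton.
model = SentenceTransformer('kamalkraj/BioSimCSE-BioLinkBERT-BASE')

# Placeholder data: one InputExample per positive sentence pair.
train_examples = [
    InputExample(texts=["anchor sentence", "matching positive sentence"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=128)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=771,
    optimizer_params={'lr': 5e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```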
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
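An equivalent two-module architecture (transformer encoder followed by mean pooling) can be assembled by hand. The base checkpoint name below is an assumption inferred from the model name; the card itself only indicates that BioLinkBERT is the underlying encoder:

```python
from sentence_transformers import SentenceTransformer, models

# Assumed base checkpoint; swap in the encoder you are actually starting from.
word_embedding_model = models.Transformer('michiyasunaga/BioLinkBERT-base', max_seq_length=128)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768 for this encoder
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```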
## 📄 License
No license information is provided in the original document.
đ Citing & Authors
@inproceedings{kanakarajan-etal-2022-biosimcse,
title = "{B}io{S}im{CSE}: {B}io{M}edical Sentence Embeddings using Contrastive learning",
author = "Kanakarajan, Kamal raj and
Kundumani, Bhuvana and
Abraham, Abhijith and
Sankarasubbu, Malaikannan",
booktitle = "Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.louhi-1.10",
pages = "81--86",
abstract = "Sentence embeddings in the form of fixed-size vectors that capture the information in the sentence as well as the context are critical components of Natural Language Processing systems. With transformer model based sentence encoders outperforming the other sentence embedding methods in the general domain, we explore the transformer based architectures to generate dense sentence embeddings in the biomedical domain. In this work, we present BioSimCSE, where we train sentence embeddings with domain specific transformer based models with biomedical texts. We assess our model{'}s performance with zero-shot and fine-tuned settings on Semantic Textual Similarity (STS) and Recognizing Question Entailment (RQE) tasks. Our BioSimCSE model using BioLinkBERT achieves state of the art (SOTA) performance on both tasks.",
}
## Additional Information

| Property | Details |
|----------|---------|
| Pipeline Tag | sentence-similarity |
| Tags | sentence-transformers, feature-extraction, sentence-similarity, transformers |
| Widget Example | Source sentence: "The up-regulation of miR-146a was also detected in cervical cancer tissues."<br>Candidate sentences: "The expression of miR-146a has been found to be up-regulated in cervical cancer."; "Only concomitant ablation of ERK1 and ERK2 impairs tumor growth."<br>Example title: "BioNLP Example" |