# all-MiniLM-L6-v2
This is a sentence embedding model that maps sentences and paragraphs into a 384-dimensional dense vector space. It can be used for tasks such as clustering and semantic search.
## Quick Start
This model can be used in two ways: with `sentence-transformers` or directly with HuggingFace Transformers. The following sections provide usage examples for both methods.
## Features
- High efficiency: Based on the MiniLM architecture, it offers fast encoding speed.
- Versatile: Applicable to various tasks such as information retrieval, clustering, and sentence similarity.
- Large-scale training: Fine-tuned on a dataset of 1 billion sentence pairs, giving strong generalization ability.
## Installation
To use this model with `sentence-transformers`, install the library first:

```bash
pip install -U sentence-transformers
```
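The advanced example below uses HuggingFace Transformers directly; for that route you only need the `transformers` and `torch` packages (versions are not pinned by this card):

```bash
pip install -U transformers torch
```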
## Usage Examples
### Basic Usage (Sentence-Transformers)
```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
### Advanced Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean pooling: average the token embeddings, taking the attention mask into account
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model and tokenizer from the HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling and L2-normalize the sentence embeddings
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```
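Since the embeddings above are L2-normalized, cosine similarity reduces to a plain dot product. Continuing from the snippet above:

```python
# Cosine similarity matrix between the example sentences
similarity = sentence_embeddings @ sentence_embeddings.T
print(similarity)
```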
## Documentation
### Evaluation Results
For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net
### Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained nreimers/MiniLM-L6-H384-uncased model and fine-tuned it on a dataset of 1B sentence pairs. The contrastive learning objective requires the model to predict, given a sentence from a pair, which of a set of randomly sampled other sentences was actually paired with it in the dataset.

This model was developed during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1B-training-pairs/7354). We benefited from efficient hardware infrastructure (7 TPU v3-8s) and advice from Google's Flax, JAX, and Cloud team members on efficient deep-learning frameworks.
### Intended uses
Our model is intended to be used as a sentence and short-paragraph encoder. Given an input text, it outputs a vector that captures the semantic information. The sentence vector can be used for information retrieval, clustering, or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated.
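For example, here is a minimal semantic-search sketch using the `sentence_transformers.util` helpers; the corpus, query, and `top_k` value are illustrative choices, not part of this model card:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

# Illustrative corpus and query (not from the model card)
corpus = [
    "The cat sits on the mat.",
    "Quantum computers use qubits.",
    "A dog is playing in the garden.",
]
query = "An animal resting indoors"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the top-2 most similar corpus sentences by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit['corpus_id']], hit['score'])
```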
### Training procedure
#### Pre-training
We use the pretrained nreimers/MiniLM-L6-H384-uncased model. Please refer to its model card for more detailed information about the pre-training procedure.
#### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity of each possible sentence pair in the batch and then apply the cross-entropy loss by comparing with the true pairs.
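A minimal PyTorch sketch of this in-batch objective, assuming L2-normalized embeddings and an illustrative similarity scale of 20 (the actual loss implementation in the released training script may differ):

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    """anchor_emb, positive_emb: (batch, dim) embeddings of the two sides of each pair.
    Row i of anchor_emb is paired with row i of positive_emb; all other rows serve as negatives."""
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)
    # Cosine similarity of every anchor with every positive in the batch
    scores = anchor_emb @ positive_emb.T * scale          # (batch, batch)
    labels = torch.arange(scores.size(0), device=scores.device)
    # Cross-entropy: the true pair (the diagonal entry) should receive the highest score
    return F.cross_entropy(scores, labels)
```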
#### Hyperparameters
We trained the model on a TPU v3-8 for 100k steps with a batch size of 1024 (128 per TPU core). We used a learning-rate warm-up of 500 steps and limited the sequence length to 128 tokens. We used the AdamW optimizer with a learning rate of 2e-5. The full training script is available in this repository: train_script.py.
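As an illustration of how these hyperparameters map onto a standard PyTorch/Transformers setup (train_script.py is authoritative; in particular, the linear warm-up schedule below is an assumption):

```python
import torch
from transformers import AutoModel, get_linear_schedule_with_warmup

# Base encoder being fine-tuned (see the Pre-training section)
model = AutoModel.from_pretrained('nreimers/MiniLM-L6-H384-uncased')

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)   # AdamW, learning rate 2e-5
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=500,        # 500-step learning-rate warm-up
    num_training_steps=100_000,  # 100k fine-tuning steps
)
# Each step processes a batch of 1024 sentence pairs (128 per TPU v3-8 core),
# with sequences truncated to 128 tokens, then:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```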
### Training data
We used the concatenation of multiple datasets to fine-tune our model; the total number of sentence pairs exceeds 1 billion. Each dataset was sampled with a weighted probability, and the configuration is detailed in the data_config.json file (a sketch of this sampling scheme follows the table below).
| Dataset | Paper | Number of training tuples |
|---|---|---|
| Reddit comments (2015-2018) | paper | 726,484,430 |
| S2ORC Citation pairs (Abstracts) | paper | 116,288,806 |
| WikiAnswers Duplicate question pairs | paper | 77,427,422 |
| PAQ (Question, Answer) pairs | paper | 64,371,441 |
| S2ORC Citation pairs (Titles) | paper | 52,603,982 |
| S2ORC (Title, Abstract) | paper | 41,769,185 |
| Stack Exchange (Title, Body) pairs | - | 25,316,456 |
| Stack Exchange (Title+Body, Answer) pairs | - | 21,396,559 |
| Stack Exchange (Title, Answer) pairs | - | 21,396,559 |
| MS MARCO triplets | paper | 9,144,553 |
| GOOAQ: Open Question Answering with Diverse Answer Types | paper | 3,012,496 |
| Yahoo Answers (Title, Answer) | paper | 1,198,260 |
| Code Search | - | 1,151,414 |
| COCO Image captions | paper | 828,395 |
| SPECTER citation triplets | paper | 684,100 |
| Yahoo Answers (Question, Answer) | paper | 681,164 |
| Yahoo Answers (Title, Question) | paper | 659,896 |
| SearchQA | paper | 582,261 |
| Eli5 | paper | 325,475 |
| Flickr 30k | paper | 317,695 |
| Stack Exchange Duplicate questions (titles) | | 304,525 |
| AllNLI (SNLI and MultiNLI) | paper SNLI, paper MultiNLI | 277,230 |
| Stack Exchange Duplicate questions (bodies) | | 250,519 |
| Stack Exchange Duplicate questions (titles + bodies) | | 250,460 |
| Sentence Compression | paper | 180,000 |
| Wikihow | paper | 128,542 |
| Altlex | paper | 112,696 |
| Quora Question Triplets | - | 103,663 |
| Simple Wikipedia | paper | 102,225 |
| Natural Questions (NQ) | paper | 100,231 |
| SQuAD2.0 | paper | 87,599 |
| TriviaQA | - | 73,346 |
| Total | | 1,170,060,424 |
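The weighted sampling described above can be sketched as follows; the dataset keys and weights here are placeholders, and the real values live in data_config.json:

```python
import random

# Hypothetical excerpt of a data config: dataset name -> sampling weight
data_config = {
    "reddit_comments": 0.30,
    "s2orc_citation_abstracts": 0.15,
    "wikianswers_duplicates": 0.10,
    # remaining datasets and weights would come from data_config.json
}

datasets = list(data_config.keys())
weights = list(data_config.values())

def sample_dataset():
    """Pick the dataset to draw the next training batch from, proportionally to its weight."""
    return random.choices(datasets, weights=weights, k=1)[0]

print(sample_dataset())
```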
## License
This project is licensed under the Apache 2.0 license.