🚀 Flax Sentence Embeddings Model
This project focuses on training sentence embedding models on large sentence-level datasets using self-supervised contrastive learning. The model can output semantic vectors for sentences, which are useful for various NLP tasks.
🚀 Quick Start
This model is designed to be used as a sentence encoder. Given an input sentence, it outputs a vector that captures the semantic information of the sentence. The sentence vector can be used for information retrieval, clustering, or sentence similarity tasks.
✨ Features
- Self-supervised Contrastive Learning: Trained on large-scale sentence-level datasets using self-supervised contrastive learning, enabling the model to capture rich semantic information.
- Fine-tuned on 1B Sentence Pairs: Fine-tuned on a dataset of 1 billion sentence pairs, enhancing the model's generalization ability.
- Efficient Hardware Utilization: Trained using 7 TPUs v3-8, with support from Google's Flax, JAX, and Cloud teams.
💻 Usage Examples
Basic Usage
Here is how to use this model to get the embedding of a given text with the SentenceTransformers library:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v4_mpnet-base')
text = "Replace me by any text you'd like."
text_embedding = model.encode(text)
```
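The encoder can also embed several sentences in one call, and the resulting vectors can be compared with cosine similarity. The snippet below is a small illustration (the example sentences are arbitrary and not part of the original project):
```python
from sentence_transformers import SentenceTransformer, util

# Load the same checkpoint as above.
model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v4_mpnet-base')

sentences = [
    "A man is playing a guitar on stage.",
    "Someone performs a song with a guitar.",
    "The stock market fell sharply today.",
]

# Encode all sentences at once; the result has one row per sentence.
embeddings = model.encode(sentences)

# Cosine similarity between every pair of sentences.
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)
```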
📚 Documentation
Model description
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained ['mpnet-base'](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1B-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as guidance from Google’s Flax, JAX, and Cloud team members on efficient deep learning frameworks.
Intended uses
Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector which captures the semantic information of the sentence. The sentence vector may be used for information retrieval, clustering, or sentence similarity tasks.
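As an illustration of the clustering use case, the sketch below groups sentence vectors with scikit-learn's KMeans. Both scikit-learn and the example corpus are our own additions for demonstration, not part of the original project:
```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v4_mpnet-base')

corpus = [
    "How do I bake sourdough bread?",
    "Tips for kneading bread dough",
    "Best hiking trails near Denver",
    "Recommended mountain hikes in Colorado",
]

# Encode the corpus and group the sentence vectors into two clusters.
embeddings = model.encode(corpus)
labels = KMeans(n_clusters=2, random_state=0).fit_predict(embeddings)

for sentence, label in zip(corpus, labels):
    print(label, sentence)
```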
Training procedure
Pre-training
We use the pretrained ['mpnet-base'](https://huggingface.co/microsoft/mpnet-base). Please refer to the model card for more detailed information about the pre-training procedure.
Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between every possible sentence pair in the batch. We then apply the cross-entropy loss by comparing with the true pairs.
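A minimal PyTorch sketch of this in-batch objective is shown below. The actual training code was written in JAX/Flax, and the scaling factor and function names here are illustrative assumptions:
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    """Cross-entropy over cosine similarities of all in-batch pairs.

    anchor_emb, positive_emb: (batch, dim) embeddings of paired sentences;
    row i of each tensor forms the true pair. `scale` is an assumed
    temperature-like factor, not the exact value used in training.
    """
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)

    # (batch, batch) matrix of cosine similarities between every anchor
    # and every candidate sentence in the batch.
    scores = anchor @ positive.T * scale

    # The true pair for row i is column i, so the labels are 0..batch-1.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```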
Hyperparameters
We trained our model on a TPU v3-8. We trained the model for 540k steps using a batch size of 1024 (128 per TPU core). We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is available in this repository.
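For readers who want to reproduce a comparable setup, here is a hedged sketch of the stated hyperparameters using PyTorch and the transformers scheduler helper. The original training ran on JAX/Flax, and the linear decay after warm-up is an assumption, not something stated in this card:
```python
from torch.optim import AdamW
from transformers import AutoModel, get_linear_schedule_with_warmup

model = AutoModel.from_pretrained("microsoft/mpnet-base")

# Hyperparameters from the card: AdamW, 2e-5 learning rate, 540k steps,
# 500 warm-up steps, batch size 1024, sequence length capped at 128 tokens.
optimizer = AdamW(model.parameters(), lr=2e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=500,
    num_training_steps=540_000,  # decay shape after warm-up is assumed
)
```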
Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset given a weighted probability, the configuration of which is detailed in the data_config.json file.
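The exact schema of data_config.json is not reproduced here; as a hedged sketch, weighted sampling over datasets could look like the following (the file layout and field names are assumptions, not the repository's actual schema):
```python
import json
import random

# Assumed layout: a list of entries with a dataset "name" and a sampling "weight".
# The real data_config.json in the repository may use different field names.
with open("data_config.json") as f:
    config = json.load(f)

names = [entry["name"] for entry in config]
weights = [entry["weight"] for entry in config]

# Draw which dataset the next training batch comes from,
# proportionally to its configured weight.
next_dataset = random.choices(names, weights=weights, k=1)[0]
print(next_dataset)
```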
| Property | Details |
|----------|---------|
| Model Type | Sentence Encoder |
| Training Data | Concatenation of multiple datasets with over 1 billion sentence pairs. See the table below for details. |
| Dataset | Paper | Number of training tuples |
|---------|-------|---------------------------|
| GOOAQ: Open Question Answering with Diverse Answer Types | paper | 3,012,496 |
| Stack Exchange | - | 364,001 |
| Flickr 30k | paper | 317,695 |
| COCO 2020 | paper | 828,395 |
| Code Search | - | 1,151,414 |
| TriviaQA | - | 73,346 |
| SQuAD2.0 | paper | 87,599 |
| Natural Questions (NQ) | paper | 100,231 |
| Simple Wikipedia | paper | 102,225 |
| Quora Question Pairs | - | 103,663 |
| Altlex | paper | 112,696 |
| Wikihow | paper | 128,542 |
| Sentence Compression | paper | 180,000 |
| AllNLI (SNLI and MultiNLI) | paper SNLI, paper MultiNLI | 277,230 |
| Eli5 | paper | 325,475 |
| SPECTER | paper | 684,100 |
| S2ORC Title/Abstract | paper | 41,769,185 |
| S2ORC Citation/Citation | paper | 52,603,982 |
| S2ORC Citation/Abstract | paper | 116,288,806 |
| PAQ | paper | 64,371,441 |
| WikiAnswers | paper | 77,427,422 |
| SearchQA | - | 582,261 |
| Yahoo Answers Title/Answer | paper | 1,198,260 |
| Yahoo Answers Title/Question | paper | 659,896 |
| Yahoo Answers Question/Answer | paper | 681,164 |
| MS MARCO | paper | 9,144,553 |
| Reddit conversational | paper | 726,484,430 |
| Total | | 1,097,953,922 |