🚀 BGE_base_3gpp-qa-v2_Matryoshka
This model, fine-tuned from BAAI/bge-base-en-v1.5 on a JSON dataset of 3GPP question-answer pairs, maps sentences and paragraphs to a 768-dimensional dense vector space. It can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
✨ Features
- Semantic Understanding: Maps sentences and paragraphs to a 768-dimensional dense vector space for semantic analysis.
- Versatile Applications: Suitable for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, etc.
📦 Installation
First, install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
💻 Usage Examples
Basic Usage
```python
from sentence_transformers import SentenceTransformer

# Load the fine-tuned model from the Hugging Face Hub
model = SentenceTransformer("iris49/3gpp-embedding-model-v0")

sentences = [
    'What types of data structures are supported by the GET request body on the resource described in table 5.2.11.3.4-2, and how do they influence the request?',
    "The data structures supported by the GET request body on the resource are detailed in table 5.2.11.3.4-2. These structures define the format and content of the data that can be sent in the request body. They might include fields such as 'filterCriteria', 'sortOrder', or 'pagination', which influence how the server processes the request and returns the appropriate data.",
    "The specific triggers on the Ro interface that can lead to the termination of the IMS service include: 1) Reception of an unsuccessful Operation Result different from DIAMETER_CREDIT_CONTROL_NOT_APPLICABLE in the Debit/Reserve Units Response message. 2) Reception of an unsuccessful Result Code different from DIAMETER_CREDIT_CONTROL_NOT_APPLICABLE within the multiple units operation in the Debit/Reserve Units Response message when only one instance of the multiple units operation field is used. 3) Execution of the termination action procedure as defined in TS 32.299 when only one instance of the Multiple Unit Operation field is used. 4) Execution of the failure handling procedures when the Failure Action is set to 'Terminate' or 'Retry & Terminate'. 5) Reception in the IMS-GWF of an Abort-Session-Request message from OCS.",
]

# Encode the sentences into 768-dimensional embeddings
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 768)

# Compute pairwise cosine similarities between the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # (3, 3)
```
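Matryoshka Usage

Because the model was trained with Matryoshka Representation Learning, its embeddings can be truncated to 512, 256, 128, or 64 dimensions with only a small drop in retrieval quality (see the evaluation table below). A minimal sketch, assuming Sentence Transformers >= 2.7 for the `truncate_dim` argument:

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of each embedding. Matryoshka training
# concentrates the most useful information in the leading dimensions, so the
# truncated vectors remain effective for search at a fraction of the storage.
model = SentenceTransformer("iris49/3gpp-embedding-model-v0", truncate_dim=256)

embeddings = model.encode([
    "What triggers on the Ro interface can terminate the IMS service?",
])
print(embeddings.shape)  # (1, 256)
```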
📚 Documentation
Model Details
Model Description
| Property | Details |
|---|---|
| Model Type | Sentence Transformer |
| Base model | BAAI/bge-base-en-v1.5 |
| Maximum Sequence Length | 512 tokens |
| Output Dimensionality | 768 dimensions |
| Similarity Function | Cosine Similarity |
| Training Dataset | json |
| Language | en |
| License | apache-2.0 |
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
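The pipeline is a BERT encoder followed by CLS-token pooling and L2 normalization, the standard BGE recipe. For illustration only, here is a hedged sketch of reproducing the same embedding with the plain `transformers` library instead of Sentence Transformers:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("iris49/3gpp-embedding-model-v0")
encoder = AutoModel.from_pretrained("iris49/3gpp-embedding-model-v0")

inputs = tokenizer("What is the Ro interface?", return_tensors="pt",
                   truncation=True, max_length=512)
with torch.no_grad():
    last_hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)

embedding = last_hidden[:, 0]                                     # (1) CLS-token pooling
embedding = torch.nn.functional.normalize(embedding, p=2, dim=1)  # (2) Normalize
print(embedding.shape)  # torch.Size([1, 768])
```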
Evaluation
Metrics
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|---|---|---|---|---|---|
| cosine_accuracy@1 | 0.8347 | 0.8341 | 0.8326 | 0.8294 | 0.8211 |
| cosine_accuracy@3 | 0.9628 | 0.963 | 0.9624 | 0.9611 | 0.9575 |
| cosine_accuracy@5 | 0.9806 | 0.9808 | 0.9802 | 0.9796 | 0.9772 |
| cosine_accuracy@10 | 0.9927 | 0.9926 | 0.9923 | 0.9917 | 0.9906 |
| cosine_precision@1 | 0.8347 | 0.8341 | 0.8326 | 0.8294 | 0.8211 |
| cosine_precision@3 | 0.3209 | 0.321 | 0.3208 | 0.3204 | 0.3192 |
| cosine_precision@5 | 0.1961 | 0.1962 | 0.196 | 0.1959 | 0.1954 |
| cosine_precision@10 | 0.0993 | 0.0993 | 0.0992 | 0.0992 | 0.0991 |
| cosine_recall@1 | 0.8347 | 0.8341 | 0.8326 | 0.8294 | 0.8211 |
| cosine_recall@3 | 0.9628 | 0.963 | 0.9624 | 0.9611 | 0.9575 |
| cosine_recall@5 | 0.9806 | 0.9808 | 0.9802 | 0.9796 | 0.9772 |
| cosine_recall@10 | 0.9927 | 0.9926 | 0.9923 | 0.9917 | 0.9906 |
| cosine_ndcg@10 | 0.9235 | 0.9233 | 0.9224 | 0.9205 | 0.9159 |
| cosine_mrr@10 | 0.9003 | 0.9 | 0.8989 | 0.8965 | 0.8908 |
| cosine_map@100 | 0.9007 | 0.9004 | 0.8993 | 0.897 | 0.8913 |
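These figures are the kind produced by Sentence Transformers' `InformationRetrievalEvaluator` run once per Matryoshka dimension. A sketch of how such a table can be regenerated, assuming you have `queries`, `corpus`, and `relevant_docs` dictionaries built from a held-out split (those variables are placeholders, not shipped with the model):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("iris49/3gpp-embedding-model-v0")

# queries:       {query_id: query_text}
# corpus:        {doc_id: passage_text}
# relevant_docs: {query_id: {doc_id, ...}}
for dim in [768, 512, 256, 128, 64]:
    evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        truncate_dim=dim,      # score embeddings truncated to this size
        name=f"dim_{dim}",
    )
    print(evaluator(model))    # accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
```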
Training Details
Training Dataset
| | anchor | positive |
|---|---|---|
| type | string | string |
| details | min: 15 tokens, mean: 30.56 tokens, max: 66 tokens | min: 42 tokens, mean: 109.65 tokens, max: 298 tokens |
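The card records the dataset only as type "json" with `anchor`/`positive` string columns, i.e. a question paired with the passage that answers it. A hedged sketch of loading such a file (the `train.jsonl` path is a placeholder):

```python
from datasets import load_dataset

# Each record holds an anchor question and its positive (answering) passage.
dataset = load_dataset("json", data_files="train.jsonl", split="train")
print(dataset.column_names)   # ['anchor', 'positive']
print(dataset[0]["anchor"])
```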
Training Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
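The citations below credit `MatryoshkaLoss` wrapped around `MultipleNegativesRankingLoss`, which matches the per-dimension metrics reported above. A minimal sketch of that loss setup with the listed hyperparameters, assuming a `train_dataset` of (anchor, positive) pairs as described earlier (output path and eval wiring are illustrative):

```python
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# In-batch negatives ranking loss, applied at every Matryoshka dimension so
# truncated embeddings are trained alongside the full 768-dim vectors.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-3gpp-qa-matryoshka",   # placeholder path
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="epoch",
    save_strategy="epoch",                      # required by load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no repeated in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # (anchor, positive) pairs as above
    loss=loss,
)
trainer.train()
```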
Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|---|---|---|---|---|---|---|---|
| 0.0913 | 10 | 1.4273 | - | - | - | - | - |
| 0.1826 | 20 | 0.5399 | - | - | - | - | - |
| 0.2740 | 30 | 0.1252 | - | - | - | - | - |
| 0.3653 | 40 | 0.0625 | - | - | - | - | - |
| 0.4566 | 50 | 0.0507 | - | - | - | - | - |
| 0.5479 | 60 | 0.0366 | - | - | - | - | - |
| 0.6393 | 70 | 0.029 | - | - | - | - | - |
| 0.7306 | 80 | 0.0239 | - | - | - | - | - |
| 0.8219 | 90 | 0.0252 | - | - | - | - | - |
| 0.9132 | 100 | 0.0237 | - | - | - | - | - |
| 0.9954 | 109 | - | 0.9199 | 0.9195 | 0.9180 | 0.9150 | 0.9081 |
| 1.0046 | 110 | 0.026 | - | - | - | - | - |
| 1.0959 | 120 | 0.017 | - | - | - | - | - |
| 1.1872 | 130 | 0.02 | - | - | - | - | - |
| 1.2785 | 140 | 0.0125 | - | - | - | - | - |
| 1.3699 | 150 | 0.0134 | - | - | - | - | - |
| 1.4612 | 160 | 0.0128 | - | - | - | - | - |
| 1.5525 | 170 | 0.0123 | - | - | - | - | - |
| 1.6438 | 180 | 0.0097 | - | - | - | - | - |
| 1.7352 | 190 | 0.0101 | - | - | - | - | - |
| 1.8265 | 200 | 0.0124 | - | - | - | - | - |
| 1.9178 | 210 | 0.0116 | - | - | - | - | - |
| 2.0 | 219 | - | 0.9220 | 0.9216 | 0.9206 | 0.9184 | 0.9130 |
| 2.0091 | 220 | 0.012 | - | - | - | - | - |
| 2.1005 | 230 | 0.0111 | - | - | - | - | - |
| 2.1918 | 240 | 0.0101 | - | - | - | - | - |
| 2.2831 | 250 | 0.0101 | - | - | - | - | - |
| 2.3744 | 260 | 0.009 | - | - | - | - | - |
| 2.4658 | 270 | 0.0103 | - | - | - | - | - |
| 2.5571 | 280 | 0.009 | - | - | - | - | - |
| 2.6484 | 290 | 0.0083 | - | - | - | - | - |
| 2.7397 | 300 | 0.0076 | - | - | - | - | - |
| 2.8311 | 310 | 0.0093 | - | - | - | - | - |
| 2.9224 | 320 | 0.0104 | - | - | - | - | - |
| 2.9954 | 328 | - | 0.9234 | 0.9230 | 0.9221 | 0.9201 | 0.9156 |
| 3.0137 | 330 | 0.0104 | - | - | - | - | - |
| 3.1050 | 340 | 0.0089 | - | - | - | - | - |
| 3.1963 | 350 | 0.0084 | - | - | - | - | - |
| 3.2877 | 360 | 0.0082 | - | - | - | - | - |
| 3.3790 | 370 | 0.0089 | - | - | - | - | - |
| 3.4703 | 380 | 0.0083 | - | - | - | - | - |
| 3.5616 | 390 | 0.0061 | - | - | - | - | - |
| 3.6530 | 400 | 0.0065 | - | - | - | - | - |
| 3.7443 | 410 | 0.0063 | - | - | - | - | - |
| 3.8356 | 420 | 0.0084 | - | - | - | - | - |
| 3.9269 | 430 | 0.0083 | - | - | - | - | - |
| 3.9817 | 436 | - | 0.9235 | 0.9233 | 0.9224 | 0.9205 | 0.9159 |
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.3.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 1.2.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
📄 License
This project is licensed under the Apache 2.0 license.
📖 Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```