🚀 Model Card for vectorizer.guava
This model, vectorizer.guava, developed by Sinequa, is a vectorizer that generates an embedding vector for a given passage or query. Passage vectors are stored in the vector index, and the query vector is used to retrieve relevant passages from the index at query time.
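As a minimal illustration of how such embeddings are used at query time (the function and variable names below are hypothetical, not the Sinequa API), retrieval reduces to a nearest-neighbor search over the stored passage vectors:

```python
import numpy as np

def cosine_top_k(query_vec, passage_matrix, k=3):
    """Return the indices of the k passages most similar to the query.

    query_vec: (d,) embedding of the query.
    passage_matrix: (n, d) embeddings of the indexed passages.
    """
    # Normalize so the dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    p = passage_matrix / np.linalg.norm(passage_matrix, axis=1, keepdims=True)
    scores = p @ q
    return np.argsort(-scores)[:k]

# Toy example with 256-dimensional vectors (the model's output size).
rng = np.random.default_rng(0)
passages = rng.normal(size=(5, 256))
query = passages[3] + 0.01 * rng.normal(size=256)  # query close to passage 3
print(cosine_top_k(query, passages, k=1))  # → [3]
```

In production the passage vectors live in Sinequa's vector index rather than an in-memory matrix, but the similarity search follows the same principle.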
🚀 Quick Start
This section provides a high-level overview of the model's functionality and its application in generating embedding vectors for passages and queries.
✨ Features
- Multilingual Support: Trained and tested in multiple languages including English, French, German, Spanish, Italian, Dutch, Japanese, Portuguese, Chinese (simplified and traditional), and Polish. It also offers basic support for 91 additional languages used in the base model's pretraining.
- Efficient Inference: Provides inference times for different GPUs (NVIDIA A10, T4, L4) and quantization types (FP16, FP32) with varying batch sizes.
- Low Memory Usage: Details the GPU memory usage of the model, excluding the ONNX Runtime initialization memory.
📦 Installation
Requirements
- Minimal Sinequa version: 11.10.0
- Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
- CUDA compute capability: above 5.0 (above 6.0 for FP16 use)
📚 Documentation
Supported Languages
The model was trained and tested in the following languages:
- English
- French
- German
- Spanish
- Italian
- Dutch
- Japanese
- Portuguese
- Chinese (simplified)
- Chinese (traditional)
- Polish
Basic support can be expected for an additional 91 languages used during the pretraining of the base model (see Appendix A of the XLM-R paper).
Scores
| Metric | Value |
|---|---|
| English Relevance (Recall@100) | 0.616 |
Note that the relevance scores are computed as an average over several retrieval datasets (see [details below](#evaluation-metrics)).
Inference Times
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|---|---|---|---|
| NVIDIA A10 | FP16 | 1 ms | 5 ms |
| NVIDIA A10 | FP32 | 2 ms | 18 ms |
| NVIDIA T4 | FP16 | 1 ms | 12 ms |
| NVIDIA T4 | FP32 | 3 ms | 52 ms |
| NVIDIA L4 | FP16 | 2 ms | 5 ms |
| NVIDIA L4 | FP32 | 4 ms | 24 ms |
GPU Memory Usage
| Quantization type | Memory |
|---|---|
| FP16 | 550 MiB |
| FP32 | 1050 MiB |
Note that the GPU memory usage reported above covers only the memory consumed by the model itself on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.
Model Details
Overview
- Number of parameters: 107 million
- Base language model: [mMiniLMv2-L6-H384-distilled-from-XLMR-Large](https://huggingface.co/nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large) (Paper, GitHub)
- Insensitive to casing and accents
- Output dimensions: 256 (reduced with an additional dense layer)
- Training procedure: Query-passage-negative triplets for datasets that provide mined hard negatives, and query-passage pairs for the rest. The number of negatives is augmented with an in-batch negative strategy.
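The in-batch negative strategy mentioned above can be sketched as follows (a numpy illustration only; the actual training code and loss are not published here): each query in a batch treats the other queries' positive passages as additional negatives, turning a batch of B positive pairs into a B-way classification problem.

```python
import numpy as np

def in_batch_negatives_loss(q, p, temperature=0.05):
    """Contrastive loss with in-batch negatives.

    q, p: (B, d) L2-normalized query and passage embeddings, where
    (q[i], p[i]) is a positive pair and every p[j], j != i, serves
    as a negative for q[i].
    """
    scores = (q @ p.T) / temperature          # (B, B) similarity matrix
    # Softmax cross-entropy with the diagonal as the correct class.
    scores -= scores.max(axis=1, keepdims=True)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy batch: each passage is a slightly perturbed copy of its query,
# so every positive aligns best with its own query and the loss is near zero.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 256)); q /= np.linalg.norm(q, axis=1, keepdims=True)
p = q + 0.01 * rng.normal(size=(4, 256)); p /= np.linalg.norm(p, axis=1, keepdims=True)
print(in_batch_negatives_loss(q, p))
```

The appeal of this strategy is that the negatives come for free: no extra forward passes are needed beyond those already required for the batch's positive pairs.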
Training Data
The model was trained using all datasets cited in the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model card. Additionally, it was trained on the datasets cited in this paper for the first nine aforementioned languages. It was also trained on [this dataset](https://huggingface.co/datasets/clarin-knext/msmarco-pl) for Polish capabilities, and on a translated version of msmarco-zh for traditional Chinese capabilities.
Evaluation Metrics
English
To determine the relevance score, we averaged the results obtained when evaluating on the datasets of the [BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.
| Dataset | Recall@100 |
|---|---|
| Average | 0.616 |
| Arguana | 0.956 |
| CLIMATE-FEVER | 0.471 |
| DBPedia Entity | 0.379 |
| FEVER | 0.824 |
| FiQA-2018 | 0.642 |
| HotpotQA | 0.579 |
| MS MARCO | 0.85 |
| NFCorpus | 0.289 |
| NQ | 0.765 |
| Quora | 0.993 |
| SCIDOCS | 0.467 |
| SciFact | 0.899 |
| TREC-COVID | 0.104 |
| Webis-Touche-2020 | 0.407 |
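The Recall@100 metric reported above is, per query, the fraction of relevant documents that appear among the top 100 retrieved results, averaged over all queries. A minimal sketch (illustrative only, not the BEIR evaluation code):

```python
def recall_at_k(retrieved, relevant, k=100):
    """Fraction of relevant doc ids appearing in the top-k retrieved list.

    retrieved: ranked list of document ids returned for one query.
    relevant:  set of document ids judged relevant for that query.
    """
    if not relevant:
        return 0.0
    top_k = set(retrieved[:k])
    return len(top_k & set(relevant)) / len(relevant)

# Toy example: 2 of the 3 relevant documents appear in the top-4 results.
print(recall_at_k(["d1", "d7", "d3", "d9"], {"d1", "d3", "d5"}, k=4))  # → 0.666...
```

The dataset-level score is the mean of this value over the dataset's queries, and the Average row above is in turn the mean over the BEIR datasets.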
Traditional Chinese
This model has traditional Chinese capabilities, which were evaluated on the dev set of msmarco-zh translated into traditional Chinese.
| Dataset | Recall@100 |
|---|---|
| msmarco-zh-traditional | 0.738 |
In comparison, the raspberry model scores 0.693 on this dataset.
Other languages
We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its multilingual capabilities. Note that not all training languages are part of the benchmark, so we only report the metrics for the languages it covers.
| Language | Recall@100 |
|---|---|
| French | 0.672 |
| German | 0.594 |
| Spanish | 0.632 |
| Japanese | 0.603 |
| Chinese (simplified) | 0.702 |