🚀 doc2query/msmarco-portuguese-mt5-base-v1
This is a doc2query model based on mT5 (also known as docT5query). It can be used for document expansion to close the lexical gap in BM25-style lexical search, and to generate training data for embedding models.
🚀 Quick Start
This model can be used in the following two main scenarios:
- Document expansion: Generate 20-40 queries for your paragraphs and index them together with the paragraphs in a standard BM25 index such as Elasticsearch, OpenSearch, or Lucene. The generated queries contain synonyms and help close the lexical gap in lexical search. They also re-weight words: important words get a higher weight even if they rarely appear in a paragraph. Our BEIR paper demonstrated that BM25 + docT5query is a powerful search engine. You can find an example of using docT5query with Pyserini in the BEIR repository (see also the sketch after this list).
- Domain Specific Training Data Generation: Generate training data to learn an embedding model. Our GPL paper and the GPL example on SBERT.net show how to use the model to generate (query, text) pairs for a collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
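The BEIR repository shows the full Pyserini indexing pipeline; the snippet below is only a minimal sketch of the expansion step itself (the expand_document helper is illustrative and not part of this repository). It appends the generated queries to the passage text, and the concatenated result is what you would feed to your BM25 indexer:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_name = 'doc2query/msmarco-portuguese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def expand_document(passage: str, num_queries: int = 20) -> str:
    """Append sampled queries to a passage before BM25 indexing."""
    input_ids = tokenizer.encode(passage, return_tensors='pt')
    with torch.no_grad():
        outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            num_return_sequences=num_queries
        )
    queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    # The generated queries add synonyms and repeat important terms,
    # which re-weights them in the BM25 index.
    return passage + ' ' + ' '.join(queries)
```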
💻 Usage Examples
Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_name = 'doc2query/msmarco-portuguese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Python é uma linguagem de programação de alto nível, interpretada de script, imperativa, orientada a objetos, funcional, de tipagem dinâmica e forte. Foi lançada por Guido van Rossum em 1991. Atualmente, possui um modelo de desenvolvimento comunitário, aberto e gerenciado pela organização sem fins lucrativos Python Software Foundation. Apesar de várias partes da linguagem possuírem padrões e especificações formais, a linguagem, como um todo, não é formalmente especificada. O padrão de facto é a implementação CPython."

def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Top-k / top-p (nucleus) sampling: diverse queries that differ on every run
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )

        # Beam search: high-probability queries, stable across runs
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )

    print("Paragraph:")
    print(para)

    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

create_queries(text)
```
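As a rule of thumb for the two decoding strategies above: beam search returns high-probability queries that are stable across runs but tend to resemble each other, while top-k/top-p sampling trades some of that stability for diversity, which is often preferable when generating many queries per paragraph for document expansion.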
⚠️ Important Note
model.generate() is non-deterministic when sampling is enabled (do_sample=True with top_k/top_p): it produces different queries each time you run it.
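If you need reproducible queries, e.g. in tests, you can seed PyTorch's random number generator before sampling. A minimal sketch (the seed value is arbitrary), reusing the model and input_ids from the example above:

```python
import torch

# Seeding the global RNG makes sampling reproducible across runs
# (on the same hardware and library versions).
torch.manual_seed(42)

queries = model.generate(
    input_ids=input_ids,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    top_k=10,
    num_return_sequences=5
)
```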
🔧 Technical Details
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see train_script.py in this repository.
The input text was truncated to 320 word pieces, and output text was generated with up to 64 word pieces.
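For longer paragraphs it is reasonable to truncate the input to the same 320-piece limit at tokenization time. A minimal sketch, reusing the tokenizer, model, and text from the usage example above:

```python
# Match the training-time limits: at most 320 word pieces in, up to 64 out.
inputs = tokenizer(text, max_length=320, truncation=True, return_tensors='pt')
output_ids = model.generate(input_ids=inputs.input_ids, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```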
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
📄 License
This project is licensed under the Apache-2.0 license.