🚀 paraphrase-spanish-distilroberta
This is a sentence-transformers model that maps sentences and paragraphs to a 768-dimensional dense vector space. It can be used for tasks such as clustering or semantic search.
🚀 Quick Start
Install sentence-transformers, load the model, and encode your sentences; the resulting 768-dimensional embeddings can then be compared, clustered, or searched. See the usage examples below.
✨ Features
- Teacher-Student Approach: Trained using a teacher-student knowledge-distillation approach on parallel EN-ES sentence pairs.
- Multi-Language Compatibility: A bilingual Spanish-English model suitable for cross-lingual tasks (see the sketch after this list).
- Versatile Applications: Can be used for clustering, semantic search, information retrieval, and sentence similarity tasks.
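After installing sentence-transformers (see Installation below), a minimal cross-lingual sketch looks like this; the sentence pair is invented for illustration, and `util.cos_sim` is the package's cosine-similarity helper:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')

# A Spanish sentence and its English translation (toy example)
es = model.encode("El clima es agradable hoy", convert_to_tensor=True)
en = model.encode("The weather is nice today", convert_to_tensor=True)

# Close translations should score near 1.0; unrelated sentences much lower
print(util.cos_sim(es, en))
```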
📦 Installation
Using this model is straightforward once you have sentence-transformers installed:
```bash
pip install -U sentence-transformers
```
💻 Usage Examples
Basic Usage
```python
from sentence_transformers import SentenceTransformer

sentences = ["Este es un ejemplo", "Cada oración es transformada"]

# Load the model and encode the sentences into 768-dimensional vectors
model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')
embeddings = model.encode(sentences)
print(embeddings)
```
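`model.encode` returns a NumPy array of shape `(len(sentences), 768)`, one embedding per input sentence.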
Advanced Usage
Without sentence-transformers, you can use the model as follows: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean pooling: average the token embeddings, taking the attention mask into account
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element holds all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['Este es un ejemplo', 'Cada oración es transformada']

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')
model = AutoModel.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')

# Tokenize sentences and compute the token embeddings
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)

# Pool the token embeddings, then L2-normalize the sentence vectors
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```
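As a quick sanity check, the manual pipeline above should agree with the sentence-transformers output; a minimal sketch, assuming both paths load the same model revision (embeddings are moved to CPU and L2-normalized on both sides before comparing):

```python
from sentence_transformers import SentenceTransformer

st_model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')
st_embeddings = st_model.encode(sentences, convert_to_tensor=True).cpu()
st_embeddings = F.normalize(st_embeddings, p=2, dim=1)  # match the normalization above

# Should print True: both paths compute mean-pooled, normalized embeddings
print(torch.allclose(st_embeddings, sentence_embeddings, atol=1e-5))
```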
📚 Documentation
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
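Per this configuration, inputs longer than 128 tokens are truncated, and sentence vectors are produced by mean pooling over the token embeddings.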
Evaluation Results
Similarity evaluation on STS-2017.es-en.txt and STS-2017.es-es.txt (translated manually for evaluation purposes). We measure the semantic textual similarity (STS) between sentence pairs in different languages:
ES-ES

| cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | euclidean_spearman | dot_pearson | dot_spearman |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.8495 | 0.8579 | 0.8675 | 0.8474 | 0.8676 | 0.8478 | 0.8277 | 0.8258 |
ES-EN

| cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | euclidean_spearman | dot_pearson | dot_spearman |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.8344 | 0.8448 | 0.8279 | 0.8168 | 0.8282 | 0.8159 | 0.8083 | 0.8145 |
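To run this kind of evaluation yourself, you can use the `EmbeddingSimilarityEvaluator` from sentence-transformers. The sketch below assumes a local tab-separated file (`sentence1<TAB>sentence2<TAB>gold score`) with gold scores on the usual 0-5 STS scale; the file name and format are assumptions, as the evaluation files are not distributed with this card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')

# Hypothetical local file; assumed format: sentence1 \t sentence2 \t gold score (0-5)
sentences1, sentences2, scores = [], [], []
with open('STS-2017.es-es.txt', encoding='utf-8') as f:
    for line in f:
        s1, s2, score = line.rstrip('\n').split('\t')
        sentences1.append(s1)
        sentences2.append(s2)
        scores.append(float(score) / 5.0)  # scale to [0, 1] for the evaluator

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, scores, name='sts17-es-es')
print(evaluator(model))  # reports Pearson/Spearman correlations for several distance metrics
```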
Intended uses
Our model is intended to be used as a sentence and short-paragraph encoder. Given an input text, it outputs a vector that captures the semantic information. The sentence vector may be used for information retrieval, clustering, or sentence similarity tasks, as sketched below.
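A minimal clustering sketch, using scikit-learn's `KMeans` (an assumption, not a tool mentioned in this card) on an invented toy corpus:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')
corpus = [
    "El gato duerme en el sofá",          # the cat sleeps on the couch
    "Un felino descansa en el sillón",    # a feline rests on the armchair
    "La bolsa de valores cayó hoy",       # the stock market fell today
    "Los mercados financieros bajaron",   # financial markets went down
]
embeddings = model.encode(corpus)

# Two clusters for this toy corpus: pets vs. finance
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(embeddings)
for label, sentence in zip(kmeans.labels_, corpus):
    print(label, sentence)
```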
Background
This model is a bilingual Spanish-English model trained following the approach described in the paper [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813) and the documentation accompanying its companion Python package. We used the strongest available pretrained English bi-encoder ([paraphrase-mpnet-base-v2](https://www.sbert.net/docs/pretrained_models.html#sentence-embedding-models)) as the teacher model, and the pretrained Spanish [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) as the student model.
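For illustration, here is a minimal sketch of that teacher-student distillation setup using sentence-transformers utilities; the training file name is hypothetical and the hyperparameters are placeholders, not the values actually used:

```python
from sentence_transformers import SentenceTransformer, models, losses
from sentence_transformers.datasets import ParallelSentencesDataset
from torch.utils.data import DataLoader

# Teacher: strong English encoder; student: Spanish RoBERTa (names from the text above)
teacher = SentenceTransformer('paraphrase-mpnet-base-v2')
word_emb = models.Transformer('bertin-project/bertin-roberta-base-spanish', max_seq_length=128)
pooling = models.Pooling(word_emb.get_word_embedding_dimension(), pooling_mode_mean_tokens=True)
student = SentenceTransformer(modules=[word_emb, pooling])

# Parallel EN-ES pairs: the student learns to map both sides onto the teacher's embedding
data = ParallelSentencesDataset(student_model=student, teacher_model=teacher)
data.load_data('parallel-sentences-en-es.tsv.gz')  # hypothetical local file
loader = DataLoader(data, shuffle=True, batch_size=64)
loss = losses.MSELoss(model=student)  # minimize distance to the teacher embeddings

student.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=1000)
```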
We developed this model during the Hackathon 2022 NLP-Spanish, organized by the hackathon-pln-es organization.
Training data
We used a concatenation of multiple datasets with EN-ES sentence pairs. The dataset used during training is available at [parallel-sentences](https://huggingface.co/datasets/hackathon-pln-es/parallel-sentences); a loading sketch follows the table below.
| Dataset |
| --- |
| AllNLI-ES (SNLI + MultiNLI) |
| EuroParl |
| JW300 |
| News Commentary |
| Open Subtitles |
| TED 2020 |
| Tatoeba |
| WikiMatrix |
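A minimal sketch for loading that dataset with the 🤗 `datasets` library, assuming the repository's default configuration resolves (otherwise pass an explicit `data_files` argument):

```python
from datasets import load_dataset

# Loads the parallel EN-ES sentence pairs used for training
dataset = load_dataset('hackathon-pln-es/parallel-sentences')
print(dataset)
```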
👥 Authors