# upskyy/e5-base-korean
This model is a fine-tuned model on the KorSTS and KorNLI datasets, derived from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base). It maps sentences and paragraphs into a 768-dimensional dense vector space, which can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Quick Start
This model is a fine-tuned version of [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) on the KorSTS and KorNLI datasets. It maps sentences and paragraphs into a 768-dimensional dense vector space, enabling NLP tasks such as semantic similarity calculation and text classification.
## Features
- Multilingual Support: supports multiple languages including Korean, making it suitable for cross-language NLP tasks.
- High-Dimensional Embeddings: maps text into a 768-dimensional dense vector space for accurate semantic representation.
- Versatile Applications: can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Installation
First, you need to install the `sentence-transformers` library:
```bash
pip install -U sentence-transformers
```
## Usage Examples
### Basic Usage
```python
from sentence_transformers import SentenceTransformer

# Load the model from the Hugging Face Hub
model = SentenceTransformer("upskyy/e5-base-korean")

sentences = [
    '아이를 가진 엄마가 해변을 걷는다.',
    '두 사람이 해변을 걷는다.',
    '한 남자가 해변에서 개를 산책시킨다.',
]

# Encode the sentences into 768-dimensional embeddings
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 768)

# Compute the 3 x 3 matrix of pairwise similarity scores
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
```
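The same embeddings can drive the semantic search and paraphrase mining use cases mentioned above. Below is a minimal sketch using `sentence_transformers.util.semantic_search`; the corpus and query strings are illustrative and not part of the original card.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("upskyy/e5-base-korean")

# Illustrative corpus and query (not from the original card)
corpus = [
    "두 사람이 해변을 걷는다.",
    "한 남자가 해변에서 개를 산책시킨다.",
    "요리사가 주방에서 음식을 준비한다.",
]
query = "아이를 가진 엄마가 해변을 걷는다."

# Encode as tensors so util.semantic_search can score them directly
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the top-2 most similar corpus sentences for the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], hit["score"])
```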
### Advanced Usage
Without `sentence-transformers`, you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling: average the token embeddings, taking the attention mask into account
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want embeddings for
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]

# Load the model and tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("upskyy/e5-base-korean")
model = AutoModel.from_pretrained("upskyy/e5-base-korean")

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform mean pooling to get sentence embeddings
sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])

print("Sentence embeddings:")
print(sentence_embeddings)
```
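Continuing from the snippet above, the pooled embeddings can be L2-normalized and compared with a dot product to obtain cosine-similarity scores, matching the similarity function listed below. This is a small sketch, not part of the original card:

```python
import torch.nn.functional as F

# L2-normalize so the dot product of two rows equals their cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T
print(cosine_scores)  # 2 x 2 matrix of cosine similarities
```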
## Documentation
### Model Details
| Property | Details |
| --- | --- |
| Model Type | Sentence Transformer |
| Base model | [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) |
| Maximum Sequence Length | 512 tokens |
| Output Dimensionality | 768 dimensions |
| Similarity Function | Cosine Similarity |
### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
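If you want to check these properties on the loaded model, Sentence Transformers exposes them directly; a small sketch, with the expected values taken from the table above:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("upskyy/e5-base-korean")

# Values reported in the Model Details table above
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 768
```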
### Evaluation
#### Semantic Similarity
| Metric | Value |
| --- | --- |
| pearson_cosine | 0.8594 |
| spearman_cosine | 0.8573 |
| pearson_manhattan | 0.8217 |
| spearman_manhattan | 0.828 |
| pearson_euclidean | 0.8209 |
| spearman_euclidean | 0.8277 |
| pearson_dot | 0.8188 |
| spearman_dot | 0.8236 |
| pearson_max | 0.8594 |
| spearman_max | 0.8573 |
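Metrics in this form (Pearson/Spearman over cosine, Manhattan, Euclidean, and dot similarities) are what Sentence Transformers' `EmbeddingSimilarityEvaluator` reports. A minimal sketch of running such an evaluation is shown below; the sentence pairs and gold scores are illustrative placeholders, and a real run would use a full Korean STS test split (loading code omitted):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("upskyy/e5-base-korean")

# Illustrative placeholder pairs with gold scores in [0, 1];
# replace with the full KorSTS test split for a real evaluation.
sentences1 = ["아이를 가진 엄마가 해변을 걷는다.", "두 사람이 해변을 걷는다.", "안녕하세요?"]
sentences2 = ["두 사람이 해변을 걷는다.", "한 남자가 해변에서 개를 산책시킨다.", "한국어 문장 임베딩을 위한 버트 모델입니다."]
gold_scores = [0.7, 0.6, 0.1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="sts-dev")
print(evaluator(model))  # Pearson/Spearman metrics per similarity function
```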
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.0+cu121
- Accelerate: 0.30.1
- Datasets: 2.16.1
- Tokenizers: 0.19.1
## Technical Details
This model is fine-tuned on the KorSTS and KorNLI datasets, starting from the intfloat/multilingual-e5-base model. It uses an XLMRobertaModel as the backbone and a Pooling layer for post-processing. The pooling operation is set to mean pooling, which aggregates the contextualized word embeddings into a single sentence embedding.
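Concretely, with token embeddings $\mathbf{h}_i$ and attention-mask values $m_i \in \{0, 1\}$ over $n$ token positions, the mean pooling used here (this restates the `mean_pooling` function from the usage example above) is:

$$
\mathbf{s} \;=\; \frac{\sum_{i=1}^{n} m_i \, \mathbf{h}_i}{\max\!\left(\sum_{i=1}^{n} m_i,\ 10^{-9}\right)}
$$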
## License
This project is licensed under the MIT license.
## Citation
### BibTeX
```bibtex
@article{wang2024multilingual,
  title={Multilingual E5 Text Embeddings: A Technical Report},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2402.05672},
  year={2024}
}
```

```bibtex
@inproceedings{reimers-2019-sentence-bert,
  title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
  author = "Reimers, Nils and Gurevych, Iryna",
  booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
  month = "11",
  year = "2019",
  publisher = "Association for Computational Linguistics",
  url = "https://arxiv.org/abs/1908.10084",
}
```