upskyy/e5-large-korean
This model is fine-tuned on korsts and kornli, based on intfloat/multilingual-e5-large. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
⨠Features
- Multilingual Support: Supports a wide range of languages, including Korean, enabling cross-language semantic analysis (see the sketch after this list).
- High-Dimensional Embeddings: Outputs 1024-dimensional embeddings for rich semantic representation.
- Versatile Applications: Applicable to various NLP tasks such as similarity search, classification, and clustering.
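As a quick illustration of the multilingual claim above, the following sketch compares a Korean sentence with an English paraphrase. The sentence pair is an illustrative assumption, not part of the original card; a comparatively high cosine similarity is expected, but the exact value will vary.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("upskyy/e5-large-korean")

# Korean sentence and an English paraphrase (illustrative pair, not from the card).
pair = [
    "두 사람이 해변을 걷는다.",             # "Two people walk on the beach."
    "Two people are walking on the beach.",
]
embeddings = model.encode(pair)
similarities = model.similarity(embeddings, embeddings)
print(similarities[0, 1])  # expected to be comparatively high for this paraphrase pair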
Installation
First, you need to install the Sentence Transformers library:
pip install -U sentence-transformers
Usage Examples
Basic Usage
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("upskyy/e5-large-korean")
sentences = [
    '아이를 가진 엄마가 해변을 걷는다.',    # "A mother with a child walks on the beach."
    '두 사람이 해변을 걷는다.',             # "Two people walk on the beach."
    '한 남자가 해변에서 개를 산책시킨다.',  # "A man is walking a dog on the beach."
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # (3, 3)
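Building on the basic example, here is a minimal semantic-search sketch using the sentence-transformers util.semantic_search helper. The corpus and query below are illustrative assumptions, not part of the original card.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("upskyy/e5-large-korean")

corpus = [
    '아이를 가진 엄마가 해변을 걷는다.',
    '두 사람이 해변을 걷는다.',
    '한 남자가 해변에서 개를 산책시킨다.',
]
query = "해변을 산책하는 사람"  # "someone taking a walk on the beach" (hypothetical query)

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Return the top-2 corpus sentences most similar to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], hit["score"])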
Advanced Usage
Without sentence-transformers, you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
from transformers import AutoTokenizer, AutoModel
import torch
# Mean-pool token embeddings, ignoring padding tokens via the attention mask.
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
sentences = ["ėë
íė¸ė?", "íęĩė´ ëŦ¸ėĨ ėë˛ ëŠė ėí ë˛í¸ ëǍë¸ė
ëë¤."]
tokenizer = AutoTokenizer.from_pretrained("upskyy/e5-large-korean")
model = AutoModel.from_pretrained("upskyy/e5-large-korean")
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    model_output = model(**encoded_input)

sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
print("Sentence embeddings:")
print(sentence_embeddings)
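The embeddings above are not L2-normalized. If you want dot products to correspond to cosine similarities, a common follow-up step (a sketch continuing the snippet above, not part of the original card) is:

import torch.nn.functional as F

# L2-normalize so that dot products equal cosine similarities.
normalized_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
similarities = normalized_embeddings @ normalized_embeddings.T
print(similarities)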
Documentation
Model Details
Model Description
| Property | Details |
|----------|---------|
| Model Type | Sentence Transformer |
| Base model | intfloat/multilingual-e5-large |
| Maximum Sequence Length | 512 tokens |
| Output Dimensionality | 1024 dimensions |
| Similarity Function | Cosine Similarity |
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
Evaluation
Metrics
Semantic Similarity
| Metric | Value |
|--------|-------|
| pearson_cosine | 0.871 |
| spearman_cosine | 0.8699 |
| pearson_manhattan | 0.8599 |
| spearman_manhattan | 0.8683 |
| pearson_euclidean | 0.8596 |
| spearman_euclidean | 0.868 |
| pearson_dot | 0.8685 |
| spearman_dot | 0.8668 |
| pearson_max | 0.871 |
| spearman_max | 0.8699 |
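Correlation metrics like these are typically computed with sentence-transformers' EmbeddingSimilarityEvaluator on an STS-style dataset. The sketch below shows the general recipe; the sentence pairs and gold scores are made-up placeholders, and the exact evaluation split behind the table above is not specified here.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator, SimilarityFunction

model = SentenceTransformer("upskyy/e5-large-korean")

# Placeholder STS-style pairs; gold scores are similarity ratings scaled to [0, 1].
sentences1 = ["두 사람이 해변을 걷는다.", "한 남자가 기타를 친다."]
sentences2 = ["아이를 가진 엄마가 해변을 걷는다.", "한 여자가 피아노를 친다."]
gold_scores = [0.6, 0.1]

evaluator = EmbeddingSimilarityEvaluator(
    sentences1, sentences2, gold_scores, main_similarity=SimilarityFunction.COSINE
)
print(evaluator(model))  # Pearson/Spearman correlations analogous to the table above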
Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.0+cu121
- Accelerate: 0.30.1
- Datasets: 2.16.1
- Tokenizers: 0.19.1
Technical Details
The model is fine-tuned on the korsts and kornli datasets, which helps it better understand Korean semantics. The pooling operation in the model architecture is crucial for aggregating token-level embeddings into sentence-level embeddings.
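Because the pooling module is configured for mean pooling, the transformers-based recipe and SentenceTransformer.encode should produce near-identical vectors. A small sanity-check sketch, assuming the sentences and sentence_embeddings variables from the Advanced Usage snippet are still in scope:

import numpy as np
from sentence_transformers import SentenceTransformer

st_model = SentenceTransformer("upskyy/e5-large-korean")
st_embeddings = st_model.encode(sentences)

# Both paths apply mean pooling, so they should agree up to small numerical differences.
print(np.allclose(st_embeddings, sentence_embeddings.numpy(), atol=1e-4))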
License
This project is licensed under the MIT license.
Citation
BibTeX
@article{wang2024multilingual,
title={Multilingual E5 Text Embeddings: A Technical Report},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2402.05672},
year={2024}
}
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}