SentenceTransformer based on intfloat/multilingual-e5-small
This is a sentence-transformers model fine-tuned from intfloat/multilingual-e5-small on datasets containing Korean query-passage pairs. It aims to enhance performance in Korean retrieval tasks. The model maps sentences and paragraphs to a 384-dimensional dense vector space and can be applied to semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
This model serves as a lightweight Korean retriever, designed for easy use and strong performance in practical retrieval tasks. It strikes a good balance between speed and accuracy, making it ideal for running demos or lightweight applications.
Notably, this small model outperforms the much larger intfloat/multilingual-e5-base (more than twice its parameter count) on Korean retrieval benchmarks, so you can get better retrieval quality with less than half the parameters.
For even better retrieval performance, we recommend combining it with a reranker (a minimal retrieve-and-rerank sketch follows the list below). Suggested reranker models:
- dragonkue/bge-reranker-v2-m3-ko
- BAAI/bge-reranker-v2-m3
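The sketch below shows one way to combine the retriever with one of the suggested rerankers in a retrieve-then-rerank setup. It is an illustrative sketch, not an official recipe: the toy query, toy corpus, and top-k value are placeholders, and loading the reranker through CrossEncoder is an assumption that holds for standard sequence-classification rerankers such as bge-reranker-v2-m3.

from sentence_transformers import SentenceTransformer, CrossEncoder

retriever = SentenceTransformer("dragonkue/multilingual-e5-small-ko-v2")
reranker = CrossEncoder("BAAI/bge-reranker-v2-m3")  # or dragonkue/bge-reranker-v2-m3-ko

query = "query: toy question about a divorce registration deadline"      # placeholder
corpus = [
    "passage: toy passage about the 3-month divorce registration rule",  # placeholder
    "passage: toy passage about an unrelated eco-label certification",   # placeholder
]

# Stage 1: dense retrieval with the bi-encoder (E5-style prefixes required)
scores = retriever.similarity(retriever.encode([query]), retriever.encode(corpus))[0]
top_k = scores.argsort(descending=True)[:2]

# Stage 2: rescore the retrieved candidates with the cross-encoder (no prefixes needed)
pairs = [(query.removeprefix("query: "), corpus[int(i)].removeprefix("passage: ")) for i in top_k]
print(reranker.predict(pairs))  # higher score = more relevant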
Quick Start
Prerequisites
First, install the Sentence Transformers library:
pip install -U sentence-transformers
Example Code
from sentence_transformers import SentenceTransformer

# Download from the Hugging Face Hub
model = SentenceTransformer("dragonkue/multilingual-e5-small-ko-v2")

# Run inference: one query and two candidate passages
# (English translations of the Korean example texts)
sentences = [
    "query: In which amendment of the North Korean Family Law was it made explicit that a divorce ruling is valid only if registered within 3 months of becoming final?",
    "passage: The North Korean Family Law, enacted in 1990, has been amended four times to date. The first amendment (1993) mainly refined the wording of several provisions, its main substantive addition being Article 52 on the period for accepting or renouncing inheritance. The second amendment (2004) added Article 20, Paragraph 3, making it clear that a court-granted divorce takes effect only if it is registered within 3 months of the ruling. The third amendment (2007) newly provided that the parent-child relationship takes legal effect from registration with the resident registration authority (Article 25, Paragraph 2) and revised the support obligations for minors and family members unable to work (Article 37, Paragraph 2).",
    "passage: The Eco-Label is a certification that allows products with better environmental performance than comparable products to display a logo and description. Its legal basis is Article 17 of the Environmental Technology and Environmental Industry Support Act, and the related international standard is ISO 14024 (Type I environmental labelling). It covers 156 product categories such as office equipment, home appliances, household goods, and construction materials; 16,647 products from 2,737 companies were certified as of the end of December 2015.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Features
- Lightweight and High-Performance: A small-sized model that offers superior performance on Korean benchmarks compared to larger models, using fewer computing resources.
- Versatile Applications: Can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, etc.
- Reranker Integration: Can be combined with a reranker for even higher retrieval performance.
Installation
pip install -U sentence-transformers
Usage Examples
Basic Usage
Basic usage with Sentence Transformers is identical to the Quick Start example above.
Advanced Usage
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text should start with "query: " or "passage: ", even for non-English texts.
# For tasks other than retrieval, you can simply use the "query: " prefix.
# (English translations of the Korean example texts)
input_texts = [
    "query: In which amendment of the North Korean Family Law was it made explicit that a divorce ruling is valid only if registered within 3 months of becoming final?",
    "passage: The North Korean Family Law, enacted in 1990, has been amended four times to date. The second amendment (2004) added Article 20, Paragraph 3, making it clear that a court-granted divorce takes effect only if it is registered within 3 months of the ruling.",
    "passage: The Eco-Label is a certification based on Article 17 of the Environmental Technology and Environmental Industry Support Act and ISO 14024 (Type I environmental labelling), covering 156 product categories; 16,647 products from 2,737 companies were certified as of the end of December 2015.",
]

tokenizer = AutoTokenizer.from_pretrained('dragonkue/multilingual-e5-small-ko-v2')
model = AutoModel.from_pretrained('dragonkue/multilingual-e5-small-ko-v2')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# Normalize embeddings so that the dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = embeddings[:1] @ embeddings[1:].T
print(scores.tolist())
Documentation
Model Details
Model Description
Property | Details |
---|---|
Model Type | Sentence Transformer |
Maximum Sequence Length | 512 tokens |
Output Dimensionality | 384 dimensions |
Similarity Function | Cosine Similarity |
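These properties can also be checked programmatically; a quick sketch that just loads the model and prints the values above:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dragonkue/multilingual-e5-small-ko-v2")
print(model.get_max_seq_length())                # 512
print(model.get_sentence_embedding_dimension())  # 384
print(model.similarity_fn_name)                  # cosine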
Model Soup
This model is created using the Model Soup technique by merging the following two models with weighted averaging:
- dragonkue/multilingual-e5-small-ko (Korean-specialized, 60% weight)
- intfloat/multilingual-e5-small (base multilingual model, 40% weight)
The 6:4 weight ratio was determined to be optimal through experimental evaluation.
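For illustration, the weighted-average merge described above can be reproduced roughly as in the sketch below. This is a minimal re-implementation under the stated 6:4 ratio, not the exact LM-Cocktail call used for this model, and the output directory name is made up.

from transformers import AutoModel

model_a = AutoModel.from_pretrained("dragonkue/multilingual-e5-small-ko")  # weight 0.6
model_b = AutoModel.from_pretrained("intfloat/multilingual-e5-small")      # weight 0.4

state_a, state_b = model_a.state_dict(), model_b.state_dict()
merged = {}
for name, param_a in state_a.items():
    param_b = state_b[name]
    if param_a.is_floating_point():
        merged[name] = 0.6 * param_a + 0.4 * param_b   # weighted model soup
    else:
        merged[name] = param_a                         # keep non-float buffers as-is

model_a.load_state_dict(merged)
model_a.save_pretrained("e5-small-ko-soup")  # made-up output path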
Related Resources
- Implementation Code: FlagEmbedding/LM_Cocktail
- Research Paper: LM-Cocktail: Resilient Tuning of Language Models via Model Merging
- Technical Blog: JinaAI's "Model Soups: Recipe for Embeddings"
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
Technical Details
Evaluation
- This evaluation references the KURE GitHub repository (https://github.com/nlpai-lab/KURE).
- We evaluated on all Korean retrieval benchmarks registered in MTEB (a minimal mteb sketch follows the benchmark list below).
Korean Retrieval Benchmark
- Ko-StrategyQA: A Korean ODQA multi-hop retrieval dataset, translated from StrategyQA.
- AutoRAGRetrieval: A Korean document retrieval dataset constructed by parsing PDFs from five domains: finance, public, medical, legal, and commerce.
- MIRACLRetrieval: A Korean document retrieval dataset based on Wikipedia.
- PublicHealthQA: A retrieval dataset focused on medical and public health domains in Korean.
- BelebeleRetrieval: A Korean document retrieval dataset based on FLORES-200.
- MrTidyRetrieval: A Wikipedia-based Korean document retrieval dataset.
- XPQARetrieval: A cross-domain Korean document retrieval dataset.
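A rough sketch of running these benchmarks with the mteb package is shown below. The task identifiers mirror the benchmark names above but may differ slightly across mteb versions, and query/passage prefix handling (required by E5-style models) is not shown here.

import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dragonkue/multilingual-e5-small-ko-v2")
tasks = mteb.get_tasks(
    tasks=[
        "Ko-StrategyQA", "AutoRAGRetrieval", "MIRACLRetrieval", "PublicHealthQA",
        "BelebeleRetrieval", "MrTidyRetrieval", "XPQARetrieval",
    ],
    languages=["kor"],  # restrict multilingual tasks to Korean
)
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")  # illustrative output folder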
Metrics
- Standard metric: NDCG@10 (a brief per-query sketch follows the results table)
Model | Size(M) | Average | XPQARetrieval | PublicHealthQA | MIRACLRetrieval | Ko-StrategyQA | BelebeleRetrieval | AutoRAGRetrieval | MrTidyRetrieval |
---|---|---|---|---|---|---|---|---|---|
BAAI/bge-m3 | 560 | 0.724169 | 0.36075 | 0.80412 | 0.70146 | 0.79405 | 0.93164 | 0.83008 | 0.64708 |
Snowflake/snowflake-arctic-embed-l-v2.0 | 560 | 0.724104 | 0.43018 | 0.81679 | 0.66077 | 0.80455 | 0.9271 | 0.83863 | 0.59071 |
intfloat/multilingual-e5-large | 560 | 0.721607 | 0.3571 | 0.82534 | 0.66486 | 0.80348 | 0.94499 | 0.81337 | 0.64211 |
dragonkue/multilingual-e5-small-ko-v2 | 118 | 0.692511 | 0.34739 | 0.77234 | 0.63262 | 0.76849 | 0.92962 | 0.85623 | 0.54089 |
intfloat/multilingual-e5-base | 278 | 0.689429 | 0.3607 | 0.77203 | 0.6227 | 0.76355 | 0.92868 | 0.79752 | 0.58082 |
dragonkue/multilingual-e5-small-ko | 118 | 0.688819 | 0.34871 | 0.79729 | 0.61113 | 0.76173 | 0.9297 | 0.86184 | 0.51133 |
exp-models/dragonkue-KoEn-E5-Tiny | 37 | 0.687496 | 0.34735 | 0.7925 | 0.6143 | 0.75978 | 0.93018 | 0.86503 | 0.50333 |
intfloat/multilingual-e5-small | 118 | 0.670906 | 0.33003 | 0.73668 | 0.61238 | 0.75157 | 0.90531 | 0.80068 | 0.55969 |
ibm-granite/granite-embedding-278m-multilingual | 278 | 0.616466 | 0.23058 | 0.77668 | 0.59216 | 0.71762 | 0.83231 | 0.70226 | 0.46365 |
ibm-granite/granite-embedding-107m-multilingual | 107 | 0.599759 | 0.23058 | 0.73209 | 0.58413 | 0.70531 | 0.82063 | 0.68243 | 0.44314 |
sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | 118 | 0.409766 | 0.21345 | 0.67409 | 0.25676 | 0.45903 | 0.71491 | 0.42296 | 0.12716 |
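For reference, NDCG@10 (the metric reported above) can be computed per query as in the simplified sketch below, which assumes binary relevance labels and takes the ideal DCG from the same retrieved list.

import math

def ndcg_at_10(ranked_relevances: list[int]) -> float:
    """ranked_relevances[i] is 1 if the i-th retrieved passage is relevant, else 0."""
    dcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ranked_relevances[:10]))
    ideal = sorted(ranked_relevances, reverse=True)
    idcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ideal[:10]))
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_10([0, 1, 0, 1, 0, 0, 0, 0, 0, 0]))  # ~0.65 for this toy ranking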
Training Details
Training Datasets
This model was fine-tuned on the same dataset used in dragonkue/snowflake-arctic-embed-l-v2.0-ko, which consists of Korean query-passage pairs. The training objective was to improve retrieval performance specifically for Korean-language tasks.
Training Methods
Following the training approach used in dragonkue/snowflake-arctic-embed-l-v2.0-ko, this model constructs in-batch negatives based on clustered passages. In addition, we introduce GISTEmbedLoss with a configurable margin.
Margin-based Training Results
- Using the standard MNR (Multiple Negatives Ranking) loss alone resulted in decreased performance.
- The original GISTEmbedLoss (without margin) yielded modest improvements of around +0.8 NDCG@10.
- Applying a margin led to performance gains of up to +1.5 NDCG@10.
- This indicates that simply tuning the margin can nearly double the gain (from about +0.8 to +1.5 NDCG@10), showing strong sensitivity to and effectiveness of margin scaling.
This margin-based approach extends the idea proposed in the NV-Retriever paper, which originally filtered false negatives during hard negative sampling. We adapt this to in-batch negatives, treating false negatives as dynamic samples guided by margin-based filtering.
The sentence-transformers library now supports GISTEmbedLoss with margin configuration, making it easy to integrate into any training pipeline. You can install the latest version with:
pip install -U sentence-transformers
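The sketch below shows one way such a margin-enabled GISTEmbedLoss could be wired into a sentence-transformers training loop. The tiny dataset, the choice of guide model, and the margin value are illustrative, and the margin arguments (margin_strategy, margin) are assumed from recent sentence-transformers releases, so check your installed version's signature.

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import GISTEmbedLoss

model = SentenceTransformer("intfloat/multilingual-e5-small")
guide = SentenceTransformer("intfloat/multilingual-e5-small")  # guide model that filters false negatives

# Toy (anchor, positive) pairs; real training uses Korean query-passage pairs
train_dataset = Dataset.from_dict({
    "anchor": ["query: toy question one", "query: toy question two"],
    "positive": ["passage: toy relevant passage one", "passage: toy relevant passage two"],
})

loss = GISTEmbedLoss(
    model=model,
    guide=guide,
    temperature=0.01,
    margin_strategy="absolute",  # assumed argument name; available in recent releases
    margin=0.1,                  # illustrative value; tune on a validation set
)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()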
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 20000
- per_device_eval_batch_size: 4096
- learning_rate: 0.00025
- num_train_epochs: 3
- warmup_ratio: 0.05
- fp16: True
- dataloader_drop_last: True
- batch_sampler: no_duplicates
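For reference, these non-default values map onto SentenceTransformerTrainingArguments roughly as in the sketch below (output_dir is illustrative):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="multilingual-e5-small-ko-v2",  # illustrative
    eval_strategy="steps",
    per_device_train_batch_size=20000,
    per_device_eval_batch_size=4096,
    learning_rate=2.5e-4,
    num_train_epochs=3,
    warmup_ratio=0.05,
    fp16=True,
    dataloader_drop_last=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)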
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 20000
- per_device_eval_batch_size: 4096
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 0.00025
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 2
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.05
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: True
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Framework Versions
- Python: 3.11.10
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
License
This model is licensed under the Apache 2.0 license.
FAQ
Important Note
Each input text should start with "query: " or "passage: ", even for non-English texts. Otherwise, you will see a performance degradation.
1. Do I need to add the prefixes "query: " and "passage: " to input texts? Yes, this is how the model was trained; otherwise you will see a performance degradation.
Here are some rules of thumb:
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
2. Why are the cosine similarity scores distributed between 0.7 and 1.0? This is known and expected behavior, since we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores rather than their absolute values, so this should not be an issue.
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
Base Model
@article{wang2024multilingual,
title={Multilingual E5 Text Embeddings: A Technical Report},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2402.05672},
year={2024}
}
NV-Retriever: Improving text embedding models with effective hard-negative mining
@article{moreira2024nvretriever,
title = {NV-Retriever: Improving text embedding models with effective hard-negative mining},
author = {Moreira, Gabriel de Souza P. and Osmulski, Radek and Xu, Mengyao and Ak, Ronay and Schifferer, Benedikt and Oldridge, Even},
journal = {arXiv preprint arXiv:2407.15831},
year = {2024},
url = {https://arxiv.org/abs/2407.15831},
doi = {10.48550/arXiv.2407.15831}
}
LM-Cocktail: Resilient Tuning of Language Models via Model Merging
@article{xiao2023lmcocktail,
title = {LM-Cocktail: Resilient Tuning of Language Models via Model Merging},
author = {Xiao, Shitao and Liu, Zheng and Zhang, Peitian and Xing, Xingrun},
journal = {arXiv preprint arXiv:2311.13534},
year = {2023}
}