
KoSimCSE-BERT

Developed by BM-K
A Korean sentence embedding model built on the BERT architecture and optimized for computing semantic similarity between sentences
Downloads 444
Release Time: 5/23/2022

Model Overview

This model optimizes sentence representations through contrastive learning to efficiently compute semantic similarity between Korean sentences, making it suitable for tasks such as information retrieval and text matching.
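A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under the ID BM-K/KoSimCSE-bert (the model ID and the [CLS]-token pooling are assumptions, not confirmed by this page) and loadable with the transformers library:

```python
# Sketch: encode two Korean sentences and compare them with cosine similarity.
# "BM-K/KoSimCSE-bert" is an assumed Hugging Face model ID; adjust if needed.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "BM-K/KoSimCSE-bert"  # assumption
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

sentences = [
    "치타가 들판을 가로 질러 먹이를 쫓는다.",
    "치타 한 마리가 먹이 뒤에서 달리고 있다.",
]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the [CLS] token representation as the sentence embedding (assumed pooling).
embeddings = outputs.last_hidden_state[:, 0, :]

score = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"cosine similarity: {score.item():.4f}")
```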

Model Features

High-performance Semantic Matching
Achieves an average score of 83.37 on Korean STS tasks, outperforming similar baseline models
Multi-dimensional Similarity Calculation
Supports various similarity measures such as cosine similarity, Euclidean distance, and Manhattan distance (illustrated in the sketch after this list)
Ready-to-use Pre-trained Model
Provides an out-of-the-box pre-trained model supporting fast inference
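The three measures named above can be computed directly over a pair of sentence embedding vectors. The helper below is an illustrative sketch, not part of the model itself; the 768-dimensional random vectors simply stand in for real embeddings.

```python
# Illustrative sketch: cosine similarity plus Euclidean and Manhattan distances
# over two sentence embedding vectors (PyTorch tensors).
import torch
import torch.nn.functional as F

def similarity_scores(a: torch.Tensor, b: torch.Tensor) -> dict:
    """Return the three measures for a pair of embedding vectors."""
    return {
        "cosine": F.cosine_similarity(a, b, dim=0).item(),  # higher = more similar
        "euclidean": torch.dist(a, b, p=2).item(),           # lower = more similar
        "manhattan": torch.dist(a, b, p=1).item(),           # lower = more similar
    }

# Random vectors standing in for real sentence embeddings.
a, b = torch.randn(768), torch.randn(768)
print(similarity_scores(a, b))
```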

Model Capabilities

Sentence Vector Generation
Semantic Similarity Calculation
Text Matching
Information Retrieval

Use Cases

Text Matching
Q&A Systems
Matching user questions with similar questions in the knowledge base
Improves Q&A accuracy
Document Deduplication
Identifying semantically similar documents
Effectively reduces duplicate content
Information Retrieval
Semantic Search
Search enhancement based on semantics rather than keyword matching (see the retrieval sketch after this list)
Improves search result relevance
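As a concrete illustration of the semantic-search and text-matching use cases, the sketch below encodes a query and a few candidate documents, then ranks the candidates by cosine similarity. The model ID and [CLS] pooling are the same assumptions as in the earlier example.

```python
# Hedged sketch of semantic search: rank candidate documents against a query
# by cosine similarity of their sentence embeddings.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "BM-K/KoSimCSE-bert"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

def embed(texts):
    """Encode a list of sentences into [CLS] embeddings (assumed pooling)."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).last_hidden_state[:, 0, :]

query = "환불 규정이 어떻게 되나요?"  # "What is the refund policy?"
documents = [
    "구매 후 7일 이내에는 전액 환불이 가능합니다.",
    "배송은 보통 2~3일이 소요됩니다.",
    "회원 가입은 이메일 인증 후 완료됩니다.",
]

q_emb = embed([query])       # shape (1, hidden)
d_emb = embed(documents)     # shape (n, hidden)
scores = F.cosine_similarity(q_emb, d_emb)  # shape (n,)

for score, doc in sorted(zip(scores.tolist(), documents), reverse=True):
    print(f"{score:.4f}  {doc}")
```

The same ranking pattern covers the Q&A matching and document deduplication cases: swap the query for a user question or a candidate document and compare against the existing collection.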