KoSimCSE-RoBERTa-multitask
Developed by BM-K
A Korean semantic-similarity model built on the RoBERTa architecture; multi-task learning produces high-quality sentence embeddings
Downloads 37.37k
Release Time: 6/1/2022
Model Overview
This model is designed specifically for Korean text. It encodes sentences into dense vector embeddings for computing semantic similarity between sentences, supports multiple similarity measures, and performs strongly on Korean semantic textual similarity tasks.
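A minimal usage sketch, assuming the Hugging Face checkpoint `BM-K/KoSimCSE-roberta-multitask` and [CLS]-token pooling (the pooling choice is an assumption here, and details may differ from the author's exact recipe):

```python
import numpy as np

MODEL_ID = "BM-K/KoSimCSE-roberta-multitask"  # Hugging Face model ID

def embed(sentences):
    """Encode Korean sentences into embedding vectors.

    Downloads the checkpoint on first use; requires `pip install transformers torch`.
    """
    import torch
    from transformers import AutoModel, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModel.from_pretrained(MODEL_ID)
    model.eval()
    batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    # [CLS] vector per sentence (assumed pooling strategy)
    return out.last_hidden_state[:, 0].numpy()

def cosine_sim(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Two embedded sentences can then be compared with `cosine_sim(vecs[0], vecs[1])`; scores near 1.0 indicate near-identical meaning.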
Model Features
Multi-task Learning Optimization
Enhances the model's understanding of Korean sentence semantics through multi-task learning strategies
High-performance Similarity Calculation
Achieves an average score of 85.77 on Korean semantic textual similarity (STS) benchmarks, outperforming comparable models
Support for Multiple Similarity Metrics
Supports various similarity calculation methods such as cosine similarity, Euclidean distance, and Manhattan distance
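The three metrics listed above can be computed directly on raw embedding vectors; a self-contained NumPy sketch (not the model's own API):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean(a, b):
    """Euclidean (L2) distance: straight-line distance between the vectors."""
    return float(np.linalg.norm(a - b))

def manhattan(a, b):
    """Manhattan (L1) distance: sum of absolute coordinate differences."""
    return float(np.abs(a - b).sum())
```

Note the orientation differs: cosine similarity is higher for more-similar sentences, while the two distances are lower.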
Model Capabilities
Korean Sentence Embedding
Semantic Similarity Calculation
Text Representation Learning
Use Cases
Information Retrieval
Similar Document Retrieval
Finds semantically similar documents through sentence embeddings
Improves retrieval accuracy and recall rate
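As a concrete sketch of this retrieval pattern (the function name and data here are illustrative, not from the model card): documents are embedded once, and a query embedding is ranked against them by cosine similarity.

```python
import numpy as np

def top_k_similar(query_vec, doc_vecs, k=3):
    """Return indices of the k documents most cosine-similar to the query.

    query_vec: (d,) array; doc_vecs: (n, d) array of document embeddings,
    e.g. produced by a sentence-embedding model such as this one.
    """
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q  # cosine similarity per document
    return np.argsort(-scores)[:k]
```

For large corpora the same idea is usually backed by an approximate nearest-neighbor index rather than a full scan.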
Intelligent Customer Service
Question Matching
Matches user questions with similar questions in the knowledge base
Enhances the accuracy of automated Q&A systems