
KoMiniLM

Developed by BM-K
KoMiniLM is a lightweight Korean language model designed to address latency and capacity limitations of large language models in practical applications.
Downloads: 244
Release date: 5/23/2022

Model Overview

KoMiniLM is a lightweight Korean language model that distills knowledge from the teacher model KLUE-BERT, making it suitable for a range of Korean natural language processing tasks.

Model Features

Lightweight Design
Compact parameter counts (23M and 68M variants), suitable for deployment in resource-constrained environments.
Knowledge Distillation
Improves the student model by distilling the KLUE-BERT teacher's self-attention distributions and self-attention value relations.
Multi-Task Support
Performs well across Korean NLP tasks, including sentiment analysis, named entity recognition, and question answering.
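The self-attention distillation idea above can be sketched in a few lines: the student is trained to minimize the KL divergence between its attention distributions and the teacher's. This is an illustrative NumPy sketch, not KoMiniLM's actual training code; the tensor shapes and function names are invented for the example:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_distillation_loss(teacher_scores, student_scores):
    """KL(teacher || student) between self-attention distributions,
    averaged over heads and query positions."""
    t = softmax(teacher_scores)
    s = softmax(student_scores)
    kl = (t * (np.log(t + 1e-12) - np.log(s + 1e-12))).sum(axis=-1)
    return kl.mean()

# Toy attention score tensors: (heads, seq_len, seq_len)
rng = np.random.default_rng(0)
teacher = rng.normal(size=(2, 4, 4))
student = teacher + 0.1 * rng.normal(size=(2, 4, 4))

print(attention_distillation_loss(teacher, teacher))  # 0.0 for identical scores
print(attention_distillation_loss(teacher, student))  # small positive value
```

In the actual method, an analogous loss is also applied to relations among self-attention value vectors, and both terms are minimized jointly during training.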

Model Capabilities

Text Classification
Named Entity Recognition
Question Answering System
Text Similarity Calculation
Sentiment Analysis
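For the text similarity capability, a common recipe with encoder models like KoMiniLM is to mean-pool the token embeddings of each sentence and compare the pooled vectors by cosine similarity. A minimal sketch with toy tensors (the pooling convention is a common assumption, not something stated in this card):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    # Average token vectors, ignoring padded positions.
    mask = attention_mask[:, :, None].astype(float)
    return (token_embeddings * mask).sum(axis=1) / mask.sum(axis=1)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy encoder outputs: (batch, seq_len, hidden)
rng = np.random.default_rng(1)
emb = rng.normal(size=(2, 5, 8))
mask = np.array([[1, 1, 1, 0, 0],   # first sentence has 3 real tokens
                 [1, 1, 1, 1, 1]])  # second sentence has 5

pooled = mean_pool(emb, mask)
print(cosine_similarity(pooled[0], pooled[1]))  # value in [-1, 1]
```

In practice the token embeddings would come from the model's last hidden states rather than random tensors.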

Use Cases

Sentiment Analysis
Movie Review Sentiment Analysis
Performs sentiment analysis on movie reviews using the NSMC dataset.
Accuracy: 89.67±0.03 (23M model)
Named Entity Recognition
Naver NER Task
Tested on the NER task of the Naver NLP Challenge 2018.
F1: 84.79±0.09 (23M model)
Question Answering System
KorQuAD Question Answering
Tested on the Korean question answering dataset KorQuAD.
EM / F1: 82.11±0.42 / 91.21±0.29 (23M model)
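The EM and F1 numbers above can be understood from how SQuAD-style QA metrics are computed: exact match checks whether the predicted answer string equals the gold answer, while F1 measures token overlap. This is an illustrative sketch, not the official KorQuAD scorer, which also normalizes text before comparison:

```python
from collections import Counter

def exact_match(pred: str, gold: str) -> float:
    # 1.0 if the (trimmed) prediction equals the gold answer exactly.
    return float(pred.strip() == gold.strip())

def f1_score(pred: str, gold: str) -> float:
    # Token-overlap F1; whitespace tokens are used here for simplicity.
    p, g = pred.split(), gold.split()
    common = Counter(p) & Counter(g)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("서울", "서울"))     # 1.0
print(f1_score("서울 특별시", "서울"))  # ≈ 0.667 (partial overlap)
```

A model's reported EM/F1 is the mean of these per-question scores over the evaluation set, taking the maximum over the gold answers when several are provided.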