SoongsilBERT-base-beep
KoELECTRA is a Korean pre-trained language model based on the ELECTRA architecture, optimized for Korean natural language processing tasks.
Release Time: 3/2/2022
Model Overview
KoELECTRA is a pre-trained language model designed specifically for Korean. Built on the ELECTRA architecture, it is well suited to a wide range of Korean NLP tasks, including text classification, named entity recognition, and question answering.
Model Features
High-performance Korean Processing
Excels at a range of Korean NLP tasks, reaching 90.63% accuracy on the NSMC sentiment classification benchmark.
Advantages of ELECTRA Architecture
Uses ELECTRA's replaced-token-detection pre-training, which is more sample-efficient than BERT's masked language modeling and delivers stronger downstream performance at comparable compute.
Multi-task Support
Supports a variety of Korean NLP tasks, including text classification, named entity recognition, and question answering.
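To make ELECTRA's training objective concrete, here is a minimal, dependency-free sketch of replaced-token detection: a "generator" step corrupts some input tokens, and the discriminator must label every position as original (0) or replaced (1). Unlike BERT, which computes its loss only over the ~15% of masked positions, the discriminator learns from all positions, which is the source of ELECTRA's sample efficiency. The function name, toy vocabulary, and corruption rate below are illustrative assumptions, not part of KoELECTRA itself.

```python
import random

def replaced_token_detection(tokens, vocab, mask_prob=0.3, seed=0):
    """Toy sketch of ELECTRA's replaced-token-detection objective.

    A generator step corrupts a fraction of the tokens; the discriminator's
    target is a per-position label: 0 = original, 1 = replaced.
    (The toy vocab and mask_prob are illustrative assumptions.)
    """
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            # Generator step: sample a plausible replacement from the vocab.
            replacement = rng.choice([v for v in vocab if v != tok])
            corrupted.append(replacement)
            labels.append(1)  # replaced
        else:
            corrupted.append(tok)
            labels.append(0)  # original
    return corrupted, labels

if __name__ == "__main__":
    vocab = ["the", "movie", "was", "great", "boring", "actor"]
    tokens = "the movie was great".split()
    corrupted, labels = replaced_token_detection(tokens, vocab)
    # The discriminator's loss is computed over *every* position's label,
    # not just the corrupted ones -- the key difference from BERT's MLM.
    print(corrupted, labels)
```

A real ELECTRA generator is a small masked language model that proposes contextually plausible replacements; the random sampling above only stands in for that step.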
Model Capabilities
Korean text classification
Korean named entity recognition
Korean question answering
Korean natural language inference
Korean semantic text similarity calculation
Korean question matching
Korean hate speech detection
Use Cases
Text Analysis
Movie Review Sentiment Analysis
Sentiment classification of movie reviews using the NSMC dataset
Achieved 90.63% accuracy
Information Extraction
Named Entity Recognition
Identifies named entities in Korean text
Achieved 88.11% F1 score
Question Answering
Korean Question Answering
Answers questions posed over Korean text
Achieved 84.83% exact match and 93.45% F1 score
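The exact-match and F1 numbers above follow the usual extractive-QA convention: exact match is a strict string comparison between the predicted and gold answer spans, while F1 is the harmonic mean of token-level precision and recall between the two spans. A simplified sketch of both metrics (real evaluation scripts additionally normalize whitespace and punctuation, which is omitted here):

```python
from collections import Counter

def exact_match(prediction: str, truth: str) -> int:
    """1 if the predicted span matches the gold span exactly, else 0."""
    return int(prediction.strip() == truth.strip())

def token_f1(prediction: str, truth: str) -> float:
    """Token-overlap F1 between predicted and gold answer spans."""
    pred_tokens = prediction.split()
    truth_tokens = truth.split()
    # Multiset intersection counts each shared token at most once per side.
    overlap = sum((Counter(pred_tokens) & Counter(truth_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    # Partial overlap: precision 1/2, recall 1/1 -> F1 = 2/3.
    print(token_f1("서울 특별시", "서울"))
```

Averaged over a benchmark's question set, these two functions yield exactly the kind of scores reported above (84.83% EM, 93.45% F1).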