Dbert

Developed by baikalai
A Korean pre-trained language model based on the BERT architecture, suitable for Korean text processing tasks.
Downloads: 17
Release Time: 3/2/2022

Model Overview

The deeqBERT Basic Version is a Korean pre-trained language model based on the BERT architecture, used primarily for Korean text understanding and generation tasks. It was trained on datasets such as Korean Wikipedia and news articles, making it suitable for a variety of natural language processing applications.
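
The snippet below is a minimal loading sketch using Hugging Face transformers. The Hub model ID shown is a hypothetical placeholder (the actual checkpoint name is not stated on this page) and should be replaced with the published deeqBERT identifier.

```python
# A minimal loading sketch via Hugging Face transformers.
# NOTE: "baikal-ai/deeqbert-base" is a hypothetical model ID, not verified;
# substitute the actual published checkpoint name.
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "baikal-ai/deeqbert-base"  # placeholder ID (assumption)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Encode a Korean sentence and inspect the contextual embeddings.
inputs = tokenizer("한국어 문장을 인코딩합니다.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```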

Model Features

Korean Optimization
Optimized specifically for Korean text, with training on Korean Wikipedia and news datasets.
BERT Tokenization
Uses the BERT tokenizer with a vocabulary of 35k entries, suitable for Korean text processing.
Pre-trained Model
Pre-trained on large-scale Korean corpora, equipped with robust language understanding capabilities.
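
As a quick check of the tokenization feature above, the following sketch inspects the vocabulary size and the subword splits produced for a Korean sentence; the model ID is the same hypothetical placeholder as before.

```python
# Inspecting the BERT tokenizer and its vocabulary size.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("baikal-ai/deeqbert-base")  # placeholder ID

print(tokenizer.vocab_size)  # the card states roughly 35k entries
print(tokenizer.tokenize("한국어 위키백과와 뉴스 기사로 학습되었습니다."))
```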

Model Capabilities

Text classification
Named entity recognition
Question-answering systems
Text generation
Semantic similarity calculation
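
For the semantic similarity capability, one common (but not the only) approach is to mean-pool the encoder's token embeddings and compare sentences by cosine similarity. The sketch below assumes the same placeholder checkpoint ID.

```python
# A sketch of sentence similarity with mean-pooled BERT embeddings.
# The checkpoint ID is a placeholder; mean pooling is one common choice.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "baikal-ai/deeqbert-base"  # placeholder ID (assumption)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

def embed(text: str) -> torch.Tensor:
    # Mean-pool token embeddings over non-padding positions.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state  # (1, seq_len, hidden)
    mask = enc["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

a, b = embed("오늘 날씨가 좋다."), embed("오늘은 맑은 날씨다.")
print(torch.cosine_similarity(a, b).item())
```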

Use Cases

Natural Language Processing
Korean News Classification
Classify Korean news articles into categories such as politics, economy, and sports.
High-accuracy classification performance
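
A hedged sketch of how such a classifier could be set up: the label set is illustrative, the checkpoint ID is a placeholder, and the classification head must be fine-tuned on labeled news data before its predictions are meaningful.

```python
# Attaching a sequence-classification head for Korean news categories.
# Labels and checkpoint ID are illustrative assumptions; the head is
# randomly initialized until fine-tuned on labeled data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "baikal-ai/deeqbert-base"  # placeholder ID (assumption)
labels = ["politics", "economy", "sports"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID, num_labels=len(labels)
)

# After fine-tuning, inference looks like this:
enc = tokenizer("삼성전자 주가가 상승했다.", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits
print(labels[logits.argmax(-1).item()])
```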
Korean Named Entity Recognition
Identify entities such as person names, locations, and organizations in Korean text.
Precise entity recognition capability
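
A sketch of the corresponding token-classification setup; the BIO tag set and checkpoint ID are illustrative assumptions, and the head requires fine-tuning on a Korean NER corpus before the output is meaningful.

```python
# Token classification for Korean NER with an illustrative BIO tag set.
# Checkpoint ID is a placeholder; the head is untrained until fine-tuned.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

MODEL_ID = "baikal-ai/deeqbert-base"  # placeholder ID (assumption)
tags = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_ID,
    num_labels=len(tags),
    id2label=dict(enumerate(tags)),
    label2id={t: i for i, t in enumerate(tags)},
)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("김철수는 서울의 삼성전자에서 일한다."))  # meaningful after fine-tuning
```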
Information Retrieval
Korean Question-Answering System
Build a Korean-language automated question-answering system that responds to user queries.
Efficient question-answer matching
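
A sketch of an extractive QA setup over this backbone; the checkpoint ID is a placeholder, and the QA head would first need fine-tuning on a Korean QA dataset such as KorQuAD.

```python
# Extractive question answering with a span-prediction head.
# Checkpoint ID is a placeholder; fine-tune on Korean QA data first.
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

MODEL_ID = "baikal-ai/deeqbert-base"  # placeholder ID (assumption)
qa = pipeline(
    "question-answering",
    model=AutoModelForQuestionAnswering.from_pretrained(MODEL_ID),
    tokenizer=AutoTokenizer.from_pretrained(MODEL_ID),
)

print(qa(question="모델은 어떤 데이터로 학습되었나?",
         context="deeqBERT는 한국어 위키백과와 뉴스 기사로 학습되었다."))
```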