DanBERT Small Cased
DanBERT is a Danish pre-trained model based on the BERT-Base architecture, trained on over 2 million Danish sentences.
License: Apache-2.0
Tags: Danish pre-trained, BERT architecture, Named Entity Recognition
Downloads: 18
Release Time: 3/2/2022
Model Overview
DanBERT is a pre-trained language model optimized for Danish, primarily used for natural language processing tasks such as named entity recognition and text classification.
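As a rough illustration, the model can be loaded through the Hugging Face transformers library and used to produce contextual embeddings for Danish text. This is a minimal sketch: the model id "alexanderfalk/danbert-small-cased" is an assumption and should be replaced with the actual Hub repository id if it differs.

```python
# Minimal sketch: load DanBERT and extract contextual embeddings.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "alexanderfalk/danbert-small-cased"  # assumed model id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Encode a Danish sentence and run it through the encoder.
sentence = "København er hovedstaden i Danmark."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state: (batch, tokens, hidden_size) contextual embeddings
print(outputs.last_hidden_state.shape)
```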
Model Features
Optimized for Danish
Specifically pre-trained for Danish, suitable for Danish natural language processing tasks.
Based on BERT Architecture
Built on the BERT-Base architecture, which provides strong language understanding and representation capabilities.
Model Capabilities
Danish Text Understanding
Named Entity Recognition
Text Classification
Use Cases
Natural Language Processing
Danish Text Classification
Classify Danish texts for tasks such as sentiment analysis or topic classification (see the sketch below).
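A minimal sketch of how such a classifier could be set up, again assuming the model id used below; note that the sequence-classification head is newly initialized here and would need fine-tuning on labeled Danish data before its predictions are meaningful.

```python
# Hedged sketch: Danish text classification on top of DanBERT.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "alexanderfalk/danbert-small-cased"  # assumed model id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID, num_labels=2  # e.g. positive / negative sentiment
)

inputs = tokenizer("Filmen var fantastisk!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Class probabilities; untrained head, so fine-tune before relying on these.
print(logits.softmax(dim=-1))
```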
Danish Named Entity Recognition
Identify named entities in Danish texts, such as person names, locations, and organization names (see the sketch below).
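A corresponding sketch for NER, under the same assumptions: the token-classification head below is randomly initialized and would have to be fine-tuned on annotated Danish NER data (e.g. BIO-tagged person/location/organization spans) before it produces real entity labels.

```python
# Hedged sketch: Danish NER with a token-classification head on DanBERT.
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          pipeline)

MODEL_ID = "alexanderfalk/danbert-small-cased"  # assumed model id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_ID, num_labels=9  # e.g. BIO tags for PER/LOC/ORG/MISC + O
)

# Group subword predictions into word-level entity spans.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("Mette Frederiksen besøgte Aarhus Universitet."))
```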