
Czert B Base Cased

Developed by UWB-AIR
CZERT is a language representation model trained specifically for Czech, outperforming multilingual BERT models on a range of Czech NLP tasks.
Downloads: 560
Release Time: 3/2/2022

Model Overview

CZERT is a Czech pre-trained language model based on the BERT architecture, available in two variants: a BERT-base version (CZERT-B) and an ALBERT version (CZERT-A). It excels at tasks such as sentiment analysis, semantic similarity, and named entity recognition.
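Since CZERT-B follows the standard BERT architecture, it can be loaded with the usual masked-language-model tooling. A minimal sketch is shown below; the Hugging Face Hub repository id is an assumption based on the developer name (UWB-AIR) and may differ.

```python
# Sketch of loading CZERT-B for masked-token prediction.
# MODEL_ID is an assumption (derived from the developer name UWB-AIR);
# check the actual Hub repository before use.
MODEL_ID = "UWB-AIR/Czert-B-base-cased"

def fill_mask_demo(text: str):
    """Predict the [MASK] token in a Czech sentence with CZERT-B.

    transformers is imported lazily so the sketch stays importable
    without the library installed; calling this downloads the weights.
    """
    from transformers import pipeline  # requires `pip install transformers`
    fill_mask = pipeline("fill-mask", model=MODEL_ID)
    return fill_mask(text)

# Example call (downloads the model weights on first run):
# fill_mask_demo("Praha je hlavní [MASK] České republiky.")
```

The same checkpoint can be loaded with a task-specific head (e.g. `AutoModelForTokenClassification`) for fine-tuning on the downstream tasks listed below.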

Model Features

Czech Language Optimization
Specifically trained for Czech, outperforming multilingual BERT on Czech language tasks
Multi-task Support
Supports a variety of NLP tasks, from the token level to the document level
Superior Performance
Outperforms models like mBERT and SlavicBERT in multiple Czech NLP benchmarks

Model Capabilities

Text classification
Semantic similarity calculation
Named entity recognition
Morphological tagging
Semantic role labeling
Sentiment analysis

Use Cases

Sentiment Analysis
Social Media Comment Sentiment Classification
Analyze sentiment tendencies in Facebook or CSFD (Czech Film Database) comments
Achieved 84.79% F1 score on the CSFD dataset
Semantic Understanding
News Text Similarity Calculation
Evaluate semantic similarity of Czech News Agency (CNA) texts
Pearson correlation coefficient reached 84.345
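Semantic-similarity models are typically evaluated with the Pearson correlation between predicted and gold similarity ratings, which is how a score like the one above is obtained. A stdlib-only sketch of the computation (the sample numbers are illustrative, not CNA data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative predicted vs. gold similarity scores (not real CNA data):
pred = [0.9, 0.2, 0.75, 0.4, 0.1]
gold = [0.95, 0.3, 0.8, 0.35, 0.05]
r = pearson(pred, gold)  # close to 1.0 for well-correlated predictions
```

Reported scores such as 84.345 correspond to the coefficient multiplied by 100.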
Information Extraction
Named Entity Recognition
Identify Czech person names, locations, and other entities from text
Achieved 86.274% F1 score on the CNEC dataset
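The F1 scores quoted above are the harmonic mean of precision and recall over predicted versus gold items. A stdlib-only sketch, using illustrative entity spans rather than real CNEC data:

```python
def f1_score(predicted: set, gold: set) -> float:
    """F1 = harmonic mean of precision and recall over predicted vs. gold items."""
    tp = len(predicted & gold)  # true positives: items in both sets
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Illustrative NER spans as (entity text, type) pairs -- not real CNEC data:
gold = {("Karel Čapek", "PER"), ("Praha", "LOC"), ("ČNB", "ORG")}
pred = {("Karel Čapek", "PER"), ("Praha", "LOC"), ("Brno", "LOC")}
score = f1_score(pred, gold)  # precision 2/3, recall 2/3 -> F1 = 2/3
```

For span-based NER evaluation, a prediction counts as a true positive only when both the span and the entity type match the gold annotation exactly.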