
Czert B Base Cased Long Zero Shot

Developed by UWB-AIR
Czert is a Czech language representation model based on BERT, optimized specifically for Czech and supporting various downstream tasks.
Downloads 18
Release Time: 3/2/2022

Model Overview

Czert is a Czech pre-trained language model based on the BERT architecture, supporting various natural language processing tasks such as sentiment classification, semantic text similarity, and named entity recognition.

Model Features

Czech language optimization
A language model specifically trained for Czech, outperforming general multilingual models on Czech language tasks.
Long document support
Extends the original model's context length by repeating positional embeddings, making it suitable for processing long documents.
Multi-task adaptation
Provides multiple pre-trained and fine-tuned versions to meet the needs of different downstream tasks.
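The "repeated positional embeddings" mentioned above can be illustrated with a minimal sketch: position indices beyond the base context window wrap around, so the embedding table the model was pre-trained with is reused for longer inputs. This is an illustration of the general idea under stated assumptions, not the authors' implementation; `BASE_LEN` and the function name are hypothetical.

```python
# Sketch of repeated positional embeddings: position indices for a long
# input wrap around the base context window, so a model pre-trained with
# BASE_LEN positions can still index its embedding table over longer
# sequences. Hypothetical names; not the authors' code.

BASE_LEN = 512  # BERT's original maximum number of positions

def repeated_position_ids(seq_len: int, base_len: int = BASE_LEN) -> list[int]:
    """Return position ids that repeat every `base_len` tokens."""
    return [i % base_len for i in range(seq_len)]

ids = repeated_position_ids(1030)
print(ids[510:516])  # around the wrap point: [510, 511, 0, 1, 2, 3]
```

Each block of 512 tokens thus sees the same learned position vectors, trading exact long-range position information for the ability to process documents longer than the pre-training window.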

Model Capabilities

Text classification
Semantic similarity calculation
Named entity recognition
Morphological analysis
Semantic role labeling
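Semantic similarity scores like those reported below are typically computed as the cosine similarity between two sentence embeddings. A minimal sketch with placeholder vectors (real embeddings would come from the model and are 768-dimensional; the 4-dimensional vectors here are purely illustrative):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder 4-dimensional "embeddings" for two news sentences.
emb_news_a = [0.2, 0.1, 0.9, 0.3]
emb_news_b = [0.25, 0.05, 0.85, 0.35]
print(round(cosine_similarity(emb_news_a, emb_news_b), 3))
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, so the result is a bounded, length-invariant similarity measure.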

Use Cases

Sentiment analysis
Facebook comment sentiment analysis: analyzing the sentiment of Czech Facebook comments. F1 score: 76.55±0.14
CSFD movie review sentiment analysis: analyzing the sentiment of reviews from the Czech movie database CSFD. F1 score: 84.79±0.26

Semantic understanding
News semantic similarity: calculating the semantic similarity of news texts from the Czech Press Agency. Score: 84.345±0.028

Information extraction
Named entity recognition: identifying Czech names, locations, and other entities in text. BSNLP 2019 F1 score: 86.729±0.344
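The F1 scores quoted above combine precision and recall into a single number. A minimal sketch of the per-class computation on toy labels (not the actual evaluation data):

```python
def f1_score(gold: list[int], pred: list[int], positive: int = 1) -> float:
    """Binary F1 for one class: harmonic mean of precision and recall."""
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: 3 true positives, 1 false positive, 1 false negative.
gold = [1, 1, 1, 1, 0, 0]
pred = [1, 1, 1, 0, 1, 0]
print(round(f1_score(gold, pred), 3))  # precision 0.75, recall 0.75 -> 0.75
```

For multi-class tasks such as named entity recognition, per-class F1 values like this are usually averaged (macro or micro) to produce the single reported score.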