Qra 1b

Developed by OPI-PG
Qra is a series of Polish-optimized large language models developed jointly by the Polish National Information Processing Institute (OPI) and Gdańsk University of Technology (PG). Qra 1b was initialized from the TinyLlama-1.1B weights and trained on 90 billion Polish tokens.
Release date: February 26, 2024

Model Overview

A foundational language model optimized for Polish; it requires fine-tuning before use in dialogue or instruction-following tasks.

Model Features

Polish language optimization
Trained on 90 billion carefully selected Polish tokens, making it well suited to Polish text processing
Efficient training techniques
Trained with modern efficiency techniques such as Flash Attention 2, mixed-precision training, and Fully Sharded Data Parallel (FSDP)
Rigorous data cleaning
Ensures training data quality through a multi-stage filtering process, including language classification, topic classification, and deduplication
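The multi-stage filtering idea above can be sketched in a few lines. This is a toy illustration, not the Qra team's actual pipeline: the diacritic-based language check and the quality heuristic are stand-ins for the trained classifiers a real corpus-cleaning pipeline would use.

```python
import hashlib
import re

# Illustrative multi-stage cleaning: language filter, quality filter,
# then exact deduplication. All heuristics here are toy stand-ins.

POLISH_CHARS = set("ąćęłńóśźż")

def looks_polish(text: str) -> bool:
    """Crude language check: does the text contain Polish diacritics?
    A real pipeline would use a trained language classifier."""
    letters = [c for c in text.lower() if c.isalpha()]
    if not letters:
        return False
    return any(c in POLISH_CHARS for c in letters)

def passes_quality(text: str) -> bool:
    """Toy quality filter: require at least three word tokens."""
    return len(re.findall(r"\w+", text)) >= 3

def clean_corpus(docs):
    """Apply language filter, quality filter, then hash-based exact dedup."""
    seen = set()
    kept = []
    for doc in docs:
        if not looks_polish(doc) or not passes_quality(doc):
            continue
        digest = hashlib.sha256(doc.strip().encode("utf-8")).hexdigest()
        if digest in seen:
            continue
        seen.add(digest)
        kept.append(doc)
    return kept
```

A production pipeline would replace the exact-hash step with near-duplicate detection (e.g., MinHash) and the diacritic check with a proper classifier, but the stage ordering shown here matches the description above.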

Model Capabilities

Polish text generation
Long text processing (4,096-token context window)
Language modeling
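Texts longer than the 4,096-token context must be split before they can be fed to the model. A minimal sliding-window sketch follows; it operates on an already-tokenized sequence, and the `window`/`overlap` parameters are illustrative defaults, not values prescribed by the Qra release.

```python
def chunk_tokens(tokens, window=4096, overlap=256):
    """Split a token sequence into overlapping windows that each fit
    the model's context. `window` is the context size in tokens;
    `overlap` carries context across chunk boundaries."""
    if window <= overlap:
        raise ValueError("window must be larger than overlap")
    step = window - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break
    return chunks
```

In practice `tokens` would come from the model's own tokenizer, so that the window size counts the same units the model sees.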

Use Cases

Text processing
Polish content generation
Generates text that follows Polish grammar and usage conventions
Language model fine-tuning foundation
Serves as a base model for fine-tuning on downstream tasks (e.g., dialogue systems)
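Because the base model has no built-in chat format, fine-tuning for dialogue starts with choosing a consistent prompt convention for the training data. The marker tags below are an invented convention for illustration only, not part of the Qra release.

```python
# Hypothetical instruction-formatting helper for building a
# fine-tuning dataset on top of the base model. The <instrukcja> /
# <odpowiedz> markers are an assumed convention, not Qra's own.

def format_example(instruction: str, response: str) -> str:
    """Wrap an (instruction, response) pair in explicit markers so the
    fine-tuned model learns where the answer begins and ends."""
    return (
        "<instrukcja>\n" + instruction.strip() + "\n</instrukcja>\n"
        "<odpowiedz>\n" + response.strip() + "\n</odpowiedz>"
    )

def build_dataset(pairs):
    """Format a list of (instruction, response) pairs into training strings."""
    return [format_example(i, r) for i, r in pairs]
```

Whatever convention is chosen, it must be applied identically at training and inference time, since the base model only learns the format it is shown.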