
FRED T5 1.7B

Developed by ai-forever
A Russian pre-trained language model based on the T5 architecture, trained with a UL2-like mixture of 7 denoising tasks; 1.7 billion parameters
Downloads 1,671
Release date: 1/20/2023

Model Overview

Large-scale pre-trained Transformer model for Russian, supporting various text generation and comprehension tasks

Model Features

Multi-task denoising training
Employs a UL2-like mixed training strategy with 7 denoising tasks to enhance model robustness
Large-scale Russian pre-training
Trained on a 300GB Russian corpus, the same dataset used for the ruT5 model
Prefix task tokens
Supports task-specific prefix tokens such as <LM> and <SC1>-<SC6> to select different generation modes
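The prefix tokens above are simply prepended to the input text before encoding. A minimal sketch of this with the Hugging Face transformers library, assuming the `ai-forever/FRED-T5-1.7B` checkpoint name from this card (the helper names are ours, and the GPT2-style tokenizer follows the release's published usage; loading the 1.7B checkpoint needs several GB of memory):

```python
def add_prefix(prefix: str, text: str) -> str:
    # Prepend a FRED-T5 task token such as <LM> or <SC1> to the raw input.
    return f"{prefix}{text}"

def generate(text: str, prefix: str = "<LM>") -> str:
    # Heavy imports are deferred so add_prefix() works without torch installed.
    import torch
    from transformers import GPT2Tokenizer, T5ForConditionalGeneration

    tokenizer = GPT2Tokenizer.from_pretrained(
        "ai-forever/FRED-T5-1.7B", eos_token="</s>"
    )
    model = T5ForConditionalGeneration.from_pretrained("ai-forever/FRED-T5-1.7B")
    input_ids = torch.tensor([tokenizer.encode(add_prefix(prefix, text))])
    outputs = model.generate(
        input_ids, eos_token_id=tokenizer.eos_token_id, early_stopping=True
    )
    # Skip the leading pad token when decoding.
    return tokenizer.decode(outputs[0][1:])
```

Calling `generate("Принялся Кутузов рассказывать...")` with the default `<LM>` prefix performs open-ended continuation, while a `<SC1>`-`<SC6>` prefix invokes one of the span-corruption denoisers.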

Model Capabilities

Russian text generation
Text denoising
Text completion
Text rewriting

Use Cases

Text generation
Story continuation
Generates coherent story content based on given beginnings
In the example, the model successfully continued a background story about General Kutuzov
Text completion
Missing text restoration
Completes masked text fragments based on context
In the example, the model correctly predicted that the masked <extra_id_0> token should be filled with 'combat experience' or 'path to generalship'
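The masked input for this use case is built by replacing the hidden span with the T5-style <extra_id_0> sentinel and prepending a denoiser token. A small string-only sketch (the function name and the Russian sentence are illustrative, not from the release):

```python
def mask_span(text: str, span: str, denoiser: str = "<SC1>") -> str:
    # Replace the first occurrence of `span` with the <extra_id_0> sentinel
    # and prepend a FRED-T5 denoiser token.
    return denoiser + text.replace(span, "<extra_id_0>", 1)

masked = mask_span("Кутузов имел большой боевой опыт.", "боевой опыт")
# masked == "<SC1>Кутузов имел большой <extra_id_0>."
```

Feeding `masked` to the model asks it to reconstruct the hidden fragment from the surrounding context, which is the text-completion behavior described above.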