
FRED-T5 Large

Developed by ai-forever
A pre-trained Russian language model based on the T5 architecture. It uses a mixture-of-denoisers training strategy with 7 denoisers, similar to UL2, and supports a variety of text generation tasks.
Downloads: 998
Release date: 2/28/2023

Model Overview

A pre-trained Transformer language model for Russian, primarily used for text generation and denoising tasks, supporting multiple prefix tokens to control generated content.

Model Features

Multi-task denoising training
Employs a mixture-of-denoisers training strategy with 7 denoisers, similar to UL2, improving the model's ability to reconstruct corrupted text.
Prefix token control
Supports multiple prefix tokens (e.g., <LM>, <SC1>-<SC6>) to control generated content and task types.
Large-scale Russian training
Trained on a 300GB Russian-language corpus, the same dataset used to train the ruT5 model.
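The prefix-token mechanism above can be illustrated with a small sketch. Only the prefix strings themselves come from the model description; the `PREFIXES` list and `make_prompt` helper are illustrative names, not part of any library API.

```python
# Control prefixes described in the model card: <LM> selects open-ended
# continuation, <SC1>..<SC6> select the span-corruption denoisers.
PREFIXES = ["<LM>"] + [f"<SC{i}>" for i in range(1, 7)]

def make_prompt(prefix: str, text: str) -> str:
    """Prepend a task-control prefix token to the input text.

    Illustrative helper; the model itself simply expects the prefix
    at the start of the input string.
    """
    if prefix not in PREFIXES:
        raise ValueError(f"unknown prefix: {prefix}")
    return prefix + text

# Open-ended continuation of a Russian sentence:
prompt = make_prompt("<LM>", "Жил-был кот, который ")
```

The prefixed string is then tokenized and passed to the model as its ordinary input; the prefix alone determines which task the model performs.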

Model Capabilities

Russian text generation
Text denoising
Prefix-controlled generation
Story continuation
Text completion

Use Cases

Text generation
Story continuation
Uses the <LM> prefix for open-ended text generation.
The model can coherently continue a story based on a given beginning.
Text completion
Uses the <SC1> prefix for text completion tasks.
The model can accurately predict and complete missing text segments.
Denoising
Noisy text restoration
Processes text inputs containing noise or missing segments.
The model can effectively restore the original text content.
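The denoising setup behind these use cases can be sketched with a toy span-corruption routine in the style of T5 pretraining. The function below is a simplified, word-level illustration with fixed short spans; it is not the exact corruption used by FRED-T5's seven denoisers, whose masking rates and span lengths are not given here.

```python
import random

def corrupt(tokens, mask_prob=0.15, seed=0):
    """Toy T5-style span corruption.

    Replaces random short spans with <extra_id_N> sentinel tokens and
    returns the (corrupted input, denoising target) pair a T5-style
    denoiser is trained on. Simplified illustration only.
    """
    rng = random.Random(seed)
    inp, tgt = [], []
    sentinel_id = 0
    i = 0
    while i < len(tokens):
        if rng.random() < mask_prob:
            sentinel = f"<extra_id_{sentinel_id}>"
            sentinel_id += 1
            span = rng.randint(1, 2)  # mask a span of 1-2 words
            inp.append(sentinel)       # input keeps only the sentinel
            tgt.append(sentinel)       # target pairs sentinel with the
            tgt.extend(tokens[i:i + span])  # words it replaced
            i += span
        else:
            inp.append(tokens[i])
            i += 1
    return " ".join(inp), " ".join(tgt)

noisy, target = corrupt("кот сидит на тёплой крыше дома".split())
```

At inference time, a corrupted string of this shape is placed behind one of the <SC*> prefixes, and the model is expected to emit the masked spans.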