
Tucano-2b4

Developed by TucanoBR
Tucano-2b4 is a large-scale language model natively pre-trained for Portuguese. It is based on the Transformer architecture and was trained on the GigaVerbo dataset of 200 billion tokens.
Downloads: 1,478
Release Date: 10/16/2024

Model Overview

The Tucano series focuses on Portuguese text generation and supports long-context processing of up to 4,096 tokens, making it suitable for Portuguese-related research and development.
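If the checkpoint is published on the Hugging Face Hub, a minimal loading sketch with the transformers library might look like the following. The repository id "TucanoBR/Tucano-2b4" is an assumption based on the developer and model names above, not something stated in this card.

```python
# Minimal sketch: loading Tucano-2b4 with Hugging Face transformers.
# The repository id "TucanoBR/Tucano-2b4" is assumed from the developer
# and model names above; adjust it if the checkpoint lives elsewhere.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TucanoBR/Tucano-2b4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The card above lists a 4,096-token context window; the value reported
# by the model config should match it.
print(model.config.max_position_embeddings)
```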

Model Features

Native Portuguese pre-training
Trained specifically for Portuguese, enabling better handling of Portuguese-language tasks.
Large-scale dataset training
Trained on the GigaVerbo dataset of 200 billion tokens, giving the model broad linguistic knowledge.
Long-context processing
Supports a context length of 4,096 tokens, allowing it to handle longer and more complex texts.

Model Capabilities

Portuguese text generation
Long text processing
Language model research

Use Cases

Language research
Portuguese language model research
Serves as a base model for Portuguese language model research
Provides a controlled experimental environment for comparative studies
Text generation
Portuguese content creation
Generates Portuguese articles, stories, and other content (see the sketch below)
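As a hedged illustration of the content-creation use case, the sketch below continues a Portuguese prompt with the model. The repository id, prompt text, and sampling parameters are illustrative assumptions, not values taken from the original card.

```python
# Sketch: Portuguese content generation with Tucano-2b4.
# The Hub id and sampling settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TucanoBR/Tucano-2b4"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Era uma vez, numa pequena cidade do interior do Brasil,"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=120,  # length of the continuation
        do_sample=True,      # sampling tends to read better for creative text
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```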