
Cerebras-GPT 2.7B

Developed by Cerebras
Cerebras-GPT 2.7B is a Transformer-based language model intended to support research on large language models; it can serve as a base model for natural language processing and related fields.
Downloads 269
Release Time: 3/20/2023

Model Overview

Cerebras-GPT 2.7B is a language model based on the Transformer architecture, mainly used for natural language processing tasks such as text generation and language understanding. It belongs to the Cerebras-GPT model family, which spans scales from 111M to 13B parameters.
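For reference, here is a minimal sketch of loading the model with the Hugging Face transformers library. It assumes the checkpoint is published on the Hub under the repo id cerebras/Cerebras-GPT-2.7B (inferred from the model name, not stated on this page).

```python
# Minimal sketch: loading Cerebras-GPT 2.7B via Hugging Face transformers.
# The Hub repo id "cerebras/Cerebras-GPT-2.7B" is assumed from the model name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "cerebras/Cerebras-GPT-2.7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Sanity check: report the parameter count (should be roughly 2.7B).
n_params = sum(p.numel() for p in model.parameters())
print(f"Loaded {model_name} with {n_params / 1e9:.1f}B parameters")
```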

Model Features

Rich model family
The Cerebras-GPT family spans seven scales: 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B parameters.
Follows the scaling law
All models are trained compute-optimally according to the Chinchilla scaling law, using roughly 20 training tokens per model parameter (see the sketch after this list).
Efficient training
Cerebras' weight streaming technology simplifies the training of large language models and enables efficient scaling across nodes.
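The 20-tokens-per-parameter rule makes the training-token budgets easy to work out. The sketch below computes them for each family size; the sizes are taken from this page, and the 20x multiplier is the Chinchilla rule of thumb.

```python
# Sketch: compute-optimal training-token budgets under the Chinchilla
# rule of ~20 training tokens per model parameter. Parameter counts are
# the Cerebras-GPT family sizes listed above.
PARAMS = {
    "111M": 111e6, "256M": 256e6, "590M": 590e6,
    "1.3B": 1.3e9, "2.7B": 2.7e9, "6.7B": 6.7e9, "13B": 13e9,
}
TOKENS_PER_PARAM = 20

for name, n_params in PARAMS.items():
    tokens = n_params * TOKENS_PER_PARAM
    print(f"{name}: ~{tokens / 1e9:.0f}B training tokens")
# e.g. the 2.7B model corresponds to ~54B training tokens
```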

Model Capabilities

Text generation
Language understanding
Natural language processing

Use Cases

Research
Large language model research
Used as a base model for studying scaling laws and training methods for large language models.
Natural language processing
Text generation
Used to generate coherent text content.
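As an illustration of the text-generation use case, the sketch below uses the transformers pipeline API with sampling. It again assumes the cerebras/Cerebras-GPT-2.7B Hub repo id; the prompt and sampling settings are arbitrary examples.

```python
# Sketch: text generation with the transformers pipeline API, assuming
# the "cerebras/Cerebras-GPT-2.7B" Hub checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="cerebras/Cerebras-GPT-2.7B")
out = generator(
    "Once upon a time,",
    max_new_tokens=60,
    do_sample=True,    # sample for more varied continuations
    top_k=50,
    temperature=0.8,
)
print(out[0]["generated_text"])
```

Greedy decoding (do_sample=False) gives deterministic output; sampling with top_k and temperature trades determinism for variety.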