
OLMo 2 0425 1B

Developed by allenai
OLMo 2 1B is the smallest model in the OLMo 2 open language model series from the Allen Institute for AI (Ai2). It was pretrained on the OLMo-mix-1124 dataset and then further trained on the Dolmino-mix-1124 dataset during the mid-training stage.
Downloads 13.31k
Release Date: 4/17/2025

Model Overview

OLMo 2 1B is a Transformer-based autoregressive language model designed to advance scientific research in language models and support English text generation tasks.
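
As a quick orientation (not part of the original card), here is a minimal text-generation sketch using Hugging Face Transformers. The repository id allenai/OLMo-2-0425-1B is assumed from the model name above, and the prompt and sampling parameters are purely illustrative.

```python
# Minimal sketch: generating English text with OLMo 2 1B via Transformers.
# The repo id "allenai/OLMo-2-0425-1B" is assumed from the model name above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-0425-1B"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a prompt and sample a short continuation.
inputs = tokenizer("Language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```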

Model Features

Open Source
The model's code, checkpoints, training logs, and related details are fully open-sourced, facilitating research and reproducibility.
Two-stage Training
Uses a two-stage strategy of initial pre-training followed by mid-training, improving model performance with high-quality datasets.
Quantization Support
Supports 8-bit quantization for efficient operation in resource-constrained environments.
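
The sketch below shows one plausible 8-bit loading path, assuming the standard Transformers + bitsandbytes route (a CUDA-capable GPU with the bitsandbytes package installed); the card itself does not spell out the exact loading code.

```python
# Sketch: loading the model in 8-bit via Transformers' bitsandbytes integration.
# Requires `pip install bitsandbytes accelerate` and a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model_8bit = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0425-1B",        # assumed repo id, as above
    quantization_config=quant_config,
    device_map="auto",               # spread layers across available devices
)
```

Loaded this way, the weights take roughly half the memory of 16-bit weights, which is what makes the 1B model practical in resource-constrained environments.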

Model Capabilities

English Text Generation
Language Model Research
Instruction Following

Use Cases

Academic Research
Scientific Research on Language Models
Used to study methods for training, optimizing, and evaluating language models.
Text Generation
Content Creation
Generates coherent English text content.