OpenLLaMA 3B

Developed by openlm-research
OpenLLaMA is an open-source reproduction of Meta AI's LLaMA large language model, offering pretrained models at the 3B, 7B, and 13B parameter scales
Downloads: 26.20k
Release date: 6/7/2023

Model Overview

An open-weight language model trained on the RedPajama dataset. It fully reproduces LLaMA's architecture and training procedure and is suited to text generation and language understanding tasks.
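The weights are published on the Hugging Face Hub under openlm-research/open_llama_3b and load through the standard Transformers LLaMA classes. The sketch below is a minimal usage example under a few assumptions: transformers and torch are installed (plus accelerate for device_map="auto"), a GPU is available for float16 inference, and the prompt text is illustrative only.

import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

# Hub repository for the 3B checkpoint; openlm-research also publishes 7B and 13B.
model_path = "openlm-research/open_llama_3b"

# OpenLLaMA ships its own tokenizer trained from scratch; the sentencepiece-based
# LlamaTokenizer (rather than an auto-converted fast tokenizer) is used here.
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

# Simple knowledge question as a generation prompt.
prompt = "Q: What is the largest animal?\nA:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))

The same code works for the larger checkpoints by swapping in the corresponding repository name.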

Model Features

Open-source reproduction
Fully reproduces Meta AI's LLaMA model architecture and training methods, while using openly available training data and a permissive license
Multiple scale options
Offers three parameter-scale versions (3B, 7B, and 13B) to accommodate different computational needs
High-performance training
Trained on cloud TPU-v4 clusters, with an optimized throughput of over 2,200 tokens per second per chip
Complete training pipeline
Includes a tokenizer and model weights trained entirely from scratch, with no dependency on the original LLaMA resources

Model Capabilities

Text generation
Question answering
Language understanding
Few-shot learning (illustrated in the sketch below)
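Few-shot use amounts to prompt construction: a handful of worked examples is concatenated ahead of the query, and the model is asked to continue the text. The sketch below only assembles such a prompt; the review texts and sentiment labels are invented for illustration, and the resulting string would be passed to model.generate() exactly as in the earlier sketch, typically with a small max_new_tokens so the model emits only the label.

# Hypothetical few-shot sentiment-classification prompt (examples are illustrative only).
examples = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regret buying this blender.", "negative"),
]
query = "The soup was cold and the service was slow."

prompt = "\n\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
prompt += f"\n\nReview: {query}\nSentiment:"
print(prompt)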

Use Cases

Education & Research
Academic Q&A
Answering knowledge-based questions in scientific, historical, and other fields
Performs comparably to the original LLaMA across multiple evaluations
Content creation
Text continuation
Generating coherent text content based on given prompts