
# Interpretability research

## Pythia-2.8B
Apache-2.0
Pythia-2.8B is a member of the scalable suite of language models developed by EleutherAI, designed specifically to support interpretability research on large language models. The model is based on the Transformer architecture and trained on the Pile dataset, with 2.8 billion parameters. (A loading sketch follows this entry.)
Large Language Model · Transformers · English
EleutherAI · 40.38k downloads · 30 likes
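
Both Pythia models listed here load through the standard causal-LM interface. A minimal sketch, assuming the Hugging Face Transformers library is installed and using the public `EleutherAI/pythia-2.8b` model ID (swap in `EleutherAI/pythia-160m` for the smaller model):

```python
# Minimal sketch: load a Pythia model with Hugging Face Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/pythia-2.8b"  # or "EleutherAI/pythia-160m"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Pythia repositories also expose intermediate training checkpoints as git
# revisions (e.g. "step3000"), which is part of what makes the suite useful
# for studying how model behavior develops over training:
# model = AutoModelForCausalLM.from_pretrained(MODEL_ID, revision="step3000")

prompt = "Interpretability research asks"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
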
## Pythia-160M
Apache-2.0
Pythia-160M is a language model dedicated to interpretability research, developed by EleutherAI. It is the 160M-parameter version of the Pythia suite, based on the Transformer architecture and trained on the Pile dataset.
Large Language Model · Transformers · English
EleutherAI · 163.75k downloads · 31 likes