
# MoE Mixture of Experts

Qwen3 235B A22B INT4 W4A16
License: Apache-2.0
Qwen3 is the latest-generation large language model in the Tongyi Qianwen (Qwen) series. It is a 235B-parameter Mixture of Experts (MoE) model that activates roughly 22B parameters per token, and INT4 (W4A16) weight-only quantization significantly reduces its memory footprint (a rough memory estimate follows below).
Tags: Large Language Model, Transformers
Author: justinjja
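The memory savings come from storing the weights in 4 bits (W4) while keeping 16-bit activations (A16). Below is a minimal back-of-the-envelope sketch in Python, assuming 1 GB = 1e9 bytes and ignoring KV cache, activation memory, and quantization overhead such as scales and zero points, so real usage will be somewhat higher.

```python
# Rough weight-memory estimate for a 235B-parameter MoE model at different precisions.
# Ignores KV cache, activations, and quantization overhead (scales, zero points).

def weight_memory_gb(num_params: float, bits_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

total_params = 235e9   # total parameters across all experts
active_params = 22e9   # parameters activated per token (the "A22B" in the name)

for label, bits in [("BF16", 16), ("INT8", 8), ("INT4 (W4A16)", 4)]:
    print(f"{label:>13}: ~{weight_memory_gb(total_params, bits):.0f} GB of weights")

# Note: with MoE, all 235B weights must still be resident (or paged in),
# but only ~22B participate in each forward pass, which lowers compute per token.
```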
MiniCPM-MoE-8x2B
MiniCPM-MoE-8x2B is a Transformer-based Mixture of Experts (MoE) language model built from 8 expert modules, of which each token activates 2 (see the routing sketch below).
Tags: Large Language Model, Transformers
Author: openbmb
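To illustrate the "8 experts, 2 active per token" routing pattern, here is a minimal PyTorch sketch. It is not the actual MiniCPM-MoE implementation: the dimensions are hypothetical, and a simple linear router scores all 8 experts per token, runs only the 2 highest-scoring ones, and combines their outputs with softmax-normalized router weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    """Minimal top-2 Mixture-of-Experts layer: 8 experts, 2 active per token."""

    def __init__(self, d_model: int = 512, d_ff: int = 2048,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten to (tokens, d_model)
        tokens = x.reshape(-1, x.shape[-1])
        logits = self.router(tokens)                      # (tokens, num_experts)
        top_w, top_idx = logits.topk(self.top_k, dim=-1)  # pick 2 experts per token
        top_w = F.softmax(top_w, dim=-1)                  # normalize their weights

        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                mask = top_idx[:, slot] == e              # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += top_w[mask, slot].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape_as(x)

x = torch.randn(2, 16, 512)
print(Top2MoE()(x).shape)  # torch.Size([2, 16, 512])
```

Production MoE layers typically add load-balancing auxiliary losses and expert capacity limits, which are omitted from this sketch.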