# Multimodal MoE Architecture

## Llama 4 Scout 17B 4E Instruct
Llama 4 Scout is a multimodal Mixture of Experts (MoE) model from Meta with 17 billion active parameters. It supports 12 languages and image understanding, and routes each token through a dynamic expert-fusion mechanism with top-k = 4 routing.
Tags: Large Language Model, Transformers, Supports Multiple Languages
Author: shadowlilac
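
The description above mentions top-k = 4 expert routing. Below is a minimal, self-contained sketch of how such top-k expert fusion works in a generic MoE layer; the class name, layer sizes, and 16-expert pool are illustrative assumptions, not Meta's actual implementation.

```python
# Illustrative top-k MoE routing sketch (assumptions: sizes, class name);
# only the k=4 routing idea comes from the model description above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int, k: int = 4):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); score all experts, keep the k best per token
        scores = self.router(x)                      # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)   # (tokens, k)
        weights = F.softmax(weights, dim=-1)         # normalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e             # tokens whose slot-th choice is expert e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

# Example: 8 tokens routed through a pool of 16 experts, 4 active per token
moe = TopKMoE(d_model=64, d_ff=256, n_experts=16, k=4)
print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```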
## Llama 4 Scout 17B 16E Instruct Bnb 8bit
The Llama 4 series comprises multimodal AI models developed by Meta that support text-and-image interaction, use a Mixture of Experts (MoE) architecture, and deliver leading performance in text and image understanding. This listing is an 8-bit (bitsandbytes) quantized release of the 16-expert Scout instruct model.
Tags: Multimodal Fusion, Transformers, Supports Multiple Languages
Author: bnb-community
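
Since this entry is an 8-bit bitsandbytes ("Bnb 8bit") release, the sketch below shows the generic transformers + bitsandbytes 8-bit loading pattern. The repo id is an assumed placeholder, and Llama 4's multimodal checkpoints may require a different Auto class than the plain causal-LM loader used here.

```python
# Hedged sketch of 8-bit loading with transformers + bitsandbytes.
# The repo id is a placeholder assumption, not a verified model path.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "bnb-community/Llama-4-Scout-17B-16E-Instruct-bnb-8bit"  # assumed placeholder

quant_config = BitsAndBytesConfig(load_in_8bit=True)  # bitsandbytes 8-bit weights

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,  # redundant if the checkpoint ships pre-quantized
    device_map="auto",                 # shard across available GPUs
)

prompt = "Summarize what a Mixture of Experts layer does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```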