# Multimodal MoE Architecture
## Llama 4 Scout 17B 4E Instruct

Llama 4 Scout is a multimodal model from Meta with 17 billion active parameters and a Mixture of Experts (MoE) architecture. It supports 12 languages and image understanding, and features a dynamic fusion mechanism that routes each token to its top-4 experts (top-k = 4).

Tags: Large Language Model · Transformers · Supports Multiple Languages

Uploaded by: shadowlilac
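
As context for the top-k routing claim above, here is a minimal, illustrative PyTorch sketch of a top-k routed mixture-of-experts feed-forward layer. It is not Meta's implementation; the hidden sizes, expert count, and SiLU activation are arbitrary choices for the example, and only the top-k routing-plus-fusion idea carries over.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoELayer(nn.Module):
    """Illustrative top-k mixture-of-experts feed-forward layer (not Meta's code)."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            [
                nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
                for _ in range(n_experts)
            ]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); the router scores every expert for every token.
        logits = self.router(x)                        # (tokens, n_experts)
        gate, idx = logits.topk(self.top_k, dim=-1)    # keep the k best experts per token
        gate = F.softmax(gate, dim=-1)                 # normalize their mixing weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                  # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += gate[mask, k].unsqueeze(-1) * expert(x[mask])
        return out


# Toy usage: 8 experts with 4 active per token, echoing the "top-k = 4" claim above.
layer = TopKMoELayer(d_model=64, d_ff=256, n_experts=8, top_k=4)
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```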
## Llama 4 Scout 17B 16E Instruct Bnb 8bit

The Llama 4 series is a family of multimodal AI models developed by Meta that support text and image interaction, use a Mixture of Experts (MoE) architecture, and deliver leading performance in text and image understanding. This entry is a bitsandbytes 8-bit quantization of Llama 4 Scout 17B 16E Instruct.

Tags: Other · Multimodal Fusion · Transformers · Supports Multiple Languages

Uploaded by: bnb-community
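
To show what an 8-bit checkpoint like this one provides in practice, here is a minimal sketch of loading Llama 4 Scout with transformers and bitsandbytes 8-bit quantization for text-only generation. The repo id and the use of AutoModelForCausalLM are assumptions (the image path of the multimodal model may need a different auto class depending on the transformers version); bitsandbytes and a CUDA GPU are required.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Repo id assumed from the listing title; verify the exact name on the model hub.
model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"

# LLM.int8() quantization: weights are stored as int8, roughly halving memory
# versus bf16 at a small quality cost. A pre-quantized bnb checkpoint (like the
# one in this listing) can be loaded the same way without this config.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # shard layers across the available GPUs/CPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Summarize the Mixture of Experts idea in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```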