Llama3.1 MOE 4X8B Gated IQ Multi Tier Deep Reasoning 32B GGUF
License: Apache-2.0
A Mixture of Experts (MoE) model based on the Llama 3.1 architecture that combines four 8B experts into a 32B-class model, featuring gated IQ and multi-tier deep reasoning, a 128k context length, and multilingual support, distributed in GGUF format.
Tags: Large Language Model, Supports Multiple Languages
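As a usage sketch (not part of the original card), the snippet below shows one way to run a GGUF quantization of this model locally with llama-cpp-python. The file name, quantization level, context size, and sampling settings are illustrative assumptions; substitute the actual GGUF file you download.

```python
# Minimal sketch: loading a GGUF quant of this model with llama-cpp-python.
# The model_path below is a placeholder, not an actual released file name.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama3.1-MOE-4x8B-Gated-IQ-Multi-Tier-Deep-Reasoning-32B.Q4_K_M.gguf",  # placeholder
    n_ctx=8192,        # the model supports up to 128k context; lower values reduce memory use
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a careful, step-by-step reasoner."},
        {"role": "user", "content": "Explain the difference between a dense model and a Mixture of Experts model."},
    ],
    max_tokens=512,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```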