# Long-context Reasoning
## Qwen3 30B A3B GPTQ Int4
- License: Apache-2.0
- Description: Qwen3 is the latest generation of the Tongyi Qianwen (Qwen) series of large language models, offering a complete suite of dense and mixture-of-experts (MoE) models and delivering breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support.
- Type: Large Language Model · Transformers
- Publisher: Qwen · 5,004 downloads · 9 likes
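The "GPTQ Int4" in the model name refers to weights stored as 4-bit integers. The sketch below shows only the basic idea of symmetric per-channel int4 quantization; it is not the full GPTQ algorithm, which additionally corrects quantization error layer by layer using second-order (Hessian) information. All names and values here are illustrative.

```python
import numpy as np

def quantize_int4(w, axis=0):
    """Symmetric per-channel int4 quantization (simplified sketch,
    not the full GPTQ procedure)."""
    # signed int4 range is [-8, 7]; scale maps the largest magnitude to 7
    scale = np.max(np.abs(w), axis=axis, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    # recover an approximation of the original float weights
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize_int4(q, scale)
# per-channel rounding error is at most half a quantization step
print(np.max(np.abs(w - w_hat)) <= np.max(scale) / 2 + 1e-6)
```

In practice the int4 values are further packed two-per-byte on disk, which is where the roughly 4x size reduction over fp16 comes from.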
## Phi 4 Reasoning Plus
- License: MIT
- Description: Phi-4-reasoning-plus is a 14-billion-parameter open-source reasoning model from Microsoft Research, optimized through supervised fine-tuning and reinforcement learning and focused on advanced reasoning in mathematics, science, and programming.
- Type: Large Language Model · Transformers · Multilingual
- Publisher: unsloth · 189 downloads · 2 likes
## Qwen3 30B A3B Base
- License: Apache-2.0
- Description: Qwen3-30B-A3B-Base is the latest 30.5B-parameter mixture-of-experts (MoE) large language model in the Qwen series, supporting 119 languages and a 32K context length.
- Type: Large Language Model · Transformers
- Publisher: Qwen · 9,745 downloads · 33 likes
## Granite 3.3 2b Instruct GGUF
- License: Apache-2.0
- Description: IBM Granite's 2-billion-parameter instruction model supporting multilingual and long-context tasks with structured reasoning capabilities.
- Type: Large Language Model
- Publisher: lmstudio-community · 444 downloads · 2 likes
## Llama 4 Maverick 17B 128E Instruct FP8
- License: Other
- Description: The Llama 4 series comprises natively multimodal AI models developed by Meta that support text and image input, use a mixture-of-experts (MoE) architecture, and deliver industry-leading performance in text and image understanding.
- Type: Multimodal Large Language Model · Transformers · Multilingual
- Publisher: meta-llama · 64.29k downloads · 107 likes
## Raptor X5 UIGEN
- License: Apache-2.0
- Description: Raptor-X5-UIGEN is a large language model based on the Qwen 2.5 14B multimodal architecture, specializing in UI design, minimalist coding, and content-intensive development, with enhanced reasoning capabilities and structured response generation.
- Type: Large Language Model · Transformers · English
- Publisher: prithivMLmods · 17 downloads · 2 likes
## Mixtral 8x7B Instruct V0.1
- License: Apache-2.0
- Description: Mixtral-8x7B is a pretrained generative sparse mixture-of-experts model that outperforms Llama 2 70B on most benchmarks.
- Type: Large Language Model · Transformers · Multilingual
- Publisher: mistralai · 505.97k downloads · 4,397 likes
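"Sparse mixture of experts" means each token is processed by only a few of the model's expert feed-forward networks, selected by a learned gate. The sketch below illustrates Mixtral-style top-2-of-8 routing in miniature; the dimensions, toy linear "experts", and function names are all illustrative, not the actual Mixtral implementation.

```python
import numpy as np

def top2_moe(x, gate_w, experts):
    """Toy sparse MoE layer: route each token to its top-2 experts
    and mix their outputs with renormalized gate weights."""
    logits = x @ gate_w                          # (tokens, n_experts)
    top2 = np.argsort(logits, axis=-1)[:, -2:]   # indices of the 2 best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = top2[t]
        # softmax over only the selected experts' logits
        w = np.exp(logits[t, sel] - logits[t, sel].max())
        w /= w.sum()
        for weight, e in zip(w, sel):
            out[t] += weight * experts[e](x[t])
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 16, 8, 4
gate_w = rng.normal(size=(d, n_experts))
# each "expert" here is just a toy linear map
expert_mats = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n_experts)]
experts = [(lambda m: (lambda v: v @ m))(m) for m in expert_mats]
x = rng.normal(size=(tokens, d))
y = top2_moe(x, gate_w, experts)
print(y.shape)  # (4, 16)
```

This is why Mixtral has roughly 47B total parameters but only about 13B active per token: the gate touches just 2 of the 8 experts for each position.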
## Flan UL2
- License: Apache-2.0
- Description: An encoder-decoder model based on the T5 architecture, instruction-tuned on the Flan prompt collection and supporting multilingual task processing.
- Type: Large Language Model · Transformers · Multilingual
- Publisher: google · 3,350 downloads · 554 likes