# Multilingual Reasoning

- **Qwen3 0.6B GGUF** (Apache-2.0, by QuantFactory; 317 downloads, 1 like)
  Qwen3-0.6B is the latest 0.6B-parameter large language model in the Qwen series, supporting switching between reasoning and non-reasoning modes, with strong reasoning, instruction-following, and multilingual capabilities.
  Tags: Large Language Model

- **Falcon H1 34B Instruct GPTQ Int8** (Other license, by tiiuae; 105 downloads, 3 likes)
  Falcon-H1 is a high-performance hybrid-architecture language model developed by TII, combining the strengths of the Transformer and Mamba architectures and supporting English and multilingual tasks.
  Tags: Large Language Model, Transformers
- **Qwen3 30B A3B GGUF** (Apache-2.0, by eaddario; 371 downloads, 2 likes)
  Qwen3 is the latest generation of large language models in the Qwen series, offering a range of dense and mixture-of-experts (MoE) models and achieving breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support.
  Tags: Large Language Model, English

- **Qwen3 235B A22B GPTQ Int4** (Apache-2.0, by Qwen; 1,563 downloads, 9 likes)
  Qwen3 is the latest generation of large language models in the Qwen series, offering a range of dense and mixture-of-experts (MoE) models. Through extensive training, Qwen3 has achieved groundbreaking progress in reasoning, instruction following, agent capabilities, and multilingual support.
  Tags: Large Language Model, Transformers

- **Qwen3 235B A22B** (Apache-2.0, by unsloth; 421 downloads, 2 likes)
  Qwen3 is the latest generation of large language models in the Qwen series, offering a range of dense and mixture-of-experts (MoE) models, with groundbreaking progress in reasoning, instruction following, agent capabilities, and multilingual support.
  Tags: Large Language Model, Transformers
- **Qwen3 1.7B GPTQ Int8** (Apache-2.0, by Qwen; 635 downloads, 1 like)
  Qwen3-1.7B GPTQ Int8 is an 8-bit quantized build of the latest 1.7B-parameter model in the Qwen (Tongyi Qianwen) series, supporting switching between reasoning and non-reasoning modes, with enhanced inference capabilities and multilingual support.
  Tags: Large Language Model, Transformers

- **Qwen3 1.7B GGUF** (Apache-2.0, by Qwen; 1,180 downloads, 1 like)
  The latest version of the Qwen (Tongyi Qianwen) series of large language models, supporting switching between thinking and non-thinking modes, with strong reasoning, multilingual, and agent capabilities.
  Tags: Large Language Model
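Many of the Qwen3 variants in this list support switching between thinking and non-thinking modes; in thinking mode the model emits its reasoning trace inside `<think>…</think>` tags ahead of the final answer. A minimal sketch for separating the trace from the answer, assuming the `<think>` tag convention used by Qwen3 (the sample completion string below is invented for illustration):

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Split a Qwen3-style completion into (reasoning trace, final answer).

    Assumes at most one <think>...</think> block at the start of the
    output, as emitted in thinking mode; returns an empty trace otherwise.
    """
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", text, flags=re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", text.strip()

# Invented sample completion, for illustration only.
sample = "<think>2 + 2 is elementary arithmetic.</think>The answer is 4."
trace, answer = split_thinking(sample)
print(answer)  # The answer is 4.
```

In non-thinking mode no `<think>` block is produced, so the same function passes the whole completion through as the answer.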
- **Qwen3 4B GGUF** (Apache-2.0, by Qwen; 4,225 downloads, 6 likes)
  Qwen3 is the latest version of the Qwen (Tongyi Qianwen) series of large language models, offering a range of dense and mixture-of-experts (MoE) models. Built on large-scale training, Qwen3 has achieved breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support.
  Tags: Large Language Model

- **Qwen3 235B A22B AWQ** (Apache-2.0, by cognitivecomputations; 2,563 downloads, 9 likes)
  Qwen3-235B-A22B is the latest-generation large language model in the Qwen series, adopting a mixture-of-experts (MoE) architecture with 235 billion total parameters and 22 billion active parameters. It excels in reasoning, instruction following, agent capabilities, and multilingual support.
  Tags: Large Language Model, Transformers
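The AWQ, GPTQ Int4/Int8, and FP8 variants in this list trade numeric precision for memory: weight storage scales roughly with parameter count times bits per weight, and for an MoE model all experts must be resident even though only the active subset (22B of 235B here) runs per token. A rough back-of-the-envelope sketch, ignoring activation memory, KV cache, and quantization overhead such as scales and zero-points:

```python
def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight-storage footprint in GiB: params * bits / 8 bytes."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# Qwen3-235B-A22B: 235B total parameters, 22B active per token (MoE).
# Memory follows the total count, since all expert weights stay loaded.
for name, bits in [("BF16", 16), ("FP8", 8), ("GPTQ Int4", 4)]:
    print(f"{name}: ~{weight_memory_gb(235, bits):.0f} GiB")
```

For the 235B model this works out to roughly 438 GiB of weights at BF16 versus about 109 GiB at 4-bit, which is why Int4 and AWQ builds of the largest models exist.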
- **Falcon H1 7B Instruct** (Other license, by tiiuae; 4,246 downloads, 7 likes)
  Falcon-H1 is an efficient hybrid-architecture language model developed by TII, combining the strengths of the Transformer and Mamba architectures and supporting English and multilingual tasks.
  Tags: Large Language Model, Transformers

- **Falcon H1 3B Instruct** (Other license, by tiiuae; 380 downloads, 4 likes)
  Falcon-H1 is a causal decoder-only language model developed by TII with a hybrid Transformer + Mamba architecture, supporting English and multilingual tasks.
  Tags: Large Language Model, Transformers

- **Falcon H1 1.5B Deep Instruct** (Other license, by tiiuae; 987 downloads, 10 likes)
  Falcon-H1 is a causal decoder-only model developed by the UAE's Technology Innovation Institute (TII), featuring a hybrid Transformer and Mamba architecture and supporting English and multilingual tasks.
  Tags: Large Language Model, Transformers

- **Falcon H1 1.5B Instruct** (Other license, by tiiuae; 1,022 downloads, 4 likes)
  Falcon-H1 is an efficient hybrid-architecture language model developed by TII, combining the strengths of the Transformer and Mamba architectures and supporting English and multilingual tasks.
  Tags: Large Language Model, Transformers
- **Falcon H1 7B Base** (Other license, by tiiuae; 227 downloads, 1 like)
  Falcon-H1 is a causal decoder-only language model with a hybrid Transformer + Mamba architecture developed by TII, with strong multilingual support.
  Tags: Large Language Model, Transformers, Supports Multiple Languages

- **Falcon H1 1.5B Base** (Other license, by tiiuae; 454 downloads, 2 likes)
  Falcon-H1 is a decoder-only causal model with a hybrid Transformer + Mamba architecture developed by TII, supporting English and multilingual tasks.
  Tags: Large Language Model, Transformers, Supports Multiple Languages
- **Qwen3 14B 128K GGUF** (Apache-2.0, by unsloth; 10.20k downloads, 13 likes)
  Qwen3 is the latest generation of large language models in the Qwen series, offering a range of dense and mixture-of-experts (MoE) models. Built on extensive training, Qwen3 has achieved breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support.
  Tags: Large Language Model, English

- **Qwen3 30B A3B 128K GGUF** (Apache-2.0, by unsloth; 48.68k downloads, 43 likes)
  Qwen3 is the latest generation of large language models in the Qwen (Tongyi Qianwen) series, offering a full line-up of dense and mixture-of-experts (MoE) models with breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support.
  Tags: Large Language Model, English

- **Qwen3 8B 128K GGUF** (Apache-2.0, by unsloth; 15.29k downloads, 14 likes)
  Qwen3-8B 128K is the latest 8B-parameter version in the Qwen (Tongyi Qianwen) series, supporting switching between thinking and non-thinking modes, with a 128K context length and strong multilingual capabilities.
  Tags: Large Language Model, English

- **Qwen3 235B A22B 128K GGUF** (Apache-2.0, by unsloth; 310.66k downloads, 26 likes)
  Qwen3 is the latest-generation large language model in the Qwen (Tongyi Qianwen) series, offering a complete suite of dense and mixture-of-experts (MoE) models. Built on large-scale training, Qwen3 has achieved breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support.
  Tags: Large Language Model, English

- **Qwen3 235B A22B GGUF** (Apache-2.0, by unsloth; 75.02k downloads, 48 likes)
  Qwen3 is the latest generation of large language models in the Qwen series, offering a range of dense and mixture-of-experts (MoE) models with breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support.
  Tags: Large Language Model, English
- **Qwen3 30B A3B FP8** (Apache-2.0, by Qwen; 107.85k downloads, 57 likes)
  Qwen3 is the latest generation of large language models in the Qwen (Tongyi Qianwen) series, offering a complete suite of dense and mixture-of-experts (MoE) models. Through large-scale training, Qwen3 has achieved breakthroughs in reasoning, instruction following, agent capabilities, and multilingual support.
  Tags: Large Language Model, Transformers

- **Qwen3 4B 128K GGUF** (Apache-2.0, by unsloth; 15.41k downloads, 17 likes)
  Qwen3-4B is the latest-generation 4B-parameter large language model in the Qwen series, supporting over 100 languages and excelling in reasoning, instruction following, and agent capabilities.
  Tags: Large Language Model, English

- **Qwen3 4B FP8** (Apache-2.0, by Qwen; 23.95k downloads, 22 likes)
  Qwen3-4B-FP8 is an FP8-quantized build of the latest 4-billion-parameter model in the Qwen series, supporting switching between thinking and non-thinking modes and excelling in reasoning, instruction following, and agent capabilities.
  Tags: Large Language Model, Transformers
- **Qwen3 1.7B Unsloth Bnb 4bit** (Apache-2.0, by unsloth; 40.77k downloads, 4 likes)
  Qwen3-1.7B is the 1.7B-parameter version of the latest generation in the Qwen series of large language models, supporting mode switching, multilingual processing, and agent capabilities.
  Tags: Large Language Model, Transformers, English

- **Qwen3 1.7B GGUF** (Apache-2.0, by unsloth; 28.55k downloads, 16 likes)
  Qwen3-1.7B is the latest generation of the Qwen series with 1.7B parameters, supporting switching between thinking and non-thinking modes, with enhanced reasoning capabilities and multilingual support.
  Tags: Large Language Model, English

- **Qwen3 0.6B Unsloth Bnb 4bit** (Apache-2.0, by unsloth; 50.36k downloads, 7 likes)
  Qwen3 is the latest generation of the Qwen series of large language models, offering a comprehensive set of dense and mixture-of-experts (MoE) models with groundbreaking progress in reasoning, instruction following, agent capabilities, and multilingual support.
  Tags: Large Language Model, Transformers, English

- **Qwen3 0.6B GGUF** (Apache-2.0, by unsloth; 53.56k downloads, 41 likes)
  Qwen3-0.6B is a 0.6B-parameter large language model developed by Alibaba Cloud and the latest member of the Qwen3 series, supporting over 100 languages with strong reasoning, instruction-following, and multilingual capabilities.
  Tags: Large Language Model, English
- **Qwen3 14B Unsloth Bnb 4bit** (Apache-2.0, by unsloth; 68.67k downloads, 5 likes)
  Qwen3 is the latest generation of large language models in the Qwen (Tongyi Qianwen) series, offering both dense and mixture-of-experts (MoE) models. Through large-scale training, Qwen3 achieves breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support.
  Tags: Large Language Model, Transformers, English

- **Qwen3 14B GGUF** (Apache-2.0, by unsloth; 81.29k downloads, 40 likes)
  Qwen3 is the latest large language model developed by Alibaba Cloud, featuring strong reasoning, instruction-following, and multilingual capabilities, with the ability to switch between thinking and non-thinking modes.
  Tags: Large Language Model, English

- **Qwen3 32B GGUF** (Apache-2.0, by unsloth; 123.35k downloads, 57 likes)
  Qwen3 is the latest version of Alibaba Cloud's large language model series, featuring strong reasoning, instruction-following, and multilingual capabilities. The 32B version is one of its dense models and supports switching between thinking and non-thinking modes.
  Tags: Large Language Model, English

- **Qwen3 4B Unsloth Bnb 4bit** (Apache-2.0, by unsloth; 72.86k downloads, 5 likes)
  Qwen3-4B is the latest generation of the Qwen series of large language models, featuring 4B parameters, supporting over 100 languages, and performing strongly in reasoning, instruction following, and agent capabilities.
  Tags: Large Language Model, Transformers, English

- **Qwen3 4B GGUF** (Apache-2.0, by unsloth; 59.40k downloads, 32 likes)
  Qwen3-4B is the latest-generation 4B-parameter large language model in the Qwen series, supporting over 100 languages and demonstrating strong reasoning, instruction-following, and agent capabilities.
  Tags: Large Language Model, English
- **Qwen 2.5 7B Reasoning** (MIT, by HyperX-Sen; 70 downloads, 3 likes)
  A fine-tuned version of Qwen/Qwen2.5-7B-Instruct, optimized specifically for advanced reasoning tasks.
  Tags: Large Language Model, Transformers, English

- **Llama 3.1 Storm 8B GGUF** (by akjindal53244; 654 downloads, 41 likes)
  Llama-3.1-Storm-8B is an improved model based on Llama-3.1-8B-Instruct, performing strongly across multiple benchmarks and well suited to dialogue and function-calling tasks.
  Tags: Large Language Model, Supports Multiple Languages

- **Wizardlm 2 8x22B** (Apache-2.0, by dreamgen; 28 downloads, 31 likes)
  WizardLM-2 8x22B is a state-of-the-art mixture-of-experts (MoE) model developed by Microsoft's WizardLM team, with significant performance improvements in complex dialogue, multilingual tasks, reasoning, and agent tasks.
  Tags: Large Language Model, Transformers

- **14B** (by CausalLM; 236 downloads, 303 likes)
  A 14B-parameter causal language model fully compatible with the Meta LLaMA 2 architecture, reported to outperform all sub-70B models on multiple benchmarks.
  Tags: Large Language Model, Transformers, Supports Multiple Languages
© 2025 AIbase