# Three-stage pre-training
## Qwen3 30B A3B Base

**Publisher:** unsloth · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **Downloads:** 1,822 · **Likes:** 3

Qwen3-30B-A3B-Base is the latest generation of large language models in the Qwen series, with many improvements in training data, model architecture, and optimization techniques, providing more powerful language processing capabilities.
## Qwen3 14B Base

**Publisher:** unsloth · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **Downloads:** 4,693 · **Likes:** 1

Qwen3-14B-Base belongs to the latest generation of the Qwen (Tongyi Qianwen) series of large language models, which provides a comprehensive set of dense and mixture-of-experts (MoE) models with significant improvements in training data, model architecture, and optimization techniques.
## Qwen3 1.7B Base

**Publisher:** unsloth · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **Downloads:** 7,444 · **Likes:** 2

Qwen3-1.7B-Base belongs to the latest generation of the Qwen series, which offers a range of dense and mixture-of-experts (MoE) models and brings significant improvements in training data, model architecture, and optimization techniques.
## Qwen3 0.6B Base Unsloth Bnb 4bit

**Publisher:** unsloth · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **Downloads:** 10.84k · **Likes:** 1

Qwen3-0.6B-Base is the latest generation of large language models in the Qwen series. It has 0.6 billion parameters, supports 119 languages, and handles a context length of up to 32,768 tokens. This listing is Unsloth's bitsandbytes (Bnb) 4-bit quantization of that model.
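Since this card is a 4-bit bitsandbytes quantization published by Unsloth, a short loading sketch may help. The snippet below is an illustration rather than something taken from the card: the repo id is an assumption based on Unsloth's usual naming scheme, and `FastLanguageModel.from_pretrained` is Unsloth's customary entry point for pre-quantized checkpoints.

```python
# Hedged sketch: loading an Unsloth bitsandbytes 4-bit checkpoint.
# The repo id below is an assumption; verify it on the hub.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit",  # assumed hub id
    max_seq_length=2048,   # the base model supports up to 32,768 tokens
    load_in_4bit=True,     # use the pre-quantized bitsandbytes weights
)
```

Keeping `max_seq_length` well below the 32,768-token maximum trades context for a smaller memory footprint, which is usually the point of running a 4-bit quant in the first place.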
## Qwen3 0.6B Base

**Publisher:** unsloth · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **Downloads:** 10.84k · **Likes:** 2

Qwen3-0.6B-Base is the latest generation of large language models in the Qwen series, which offers a range of dense and mixture-of-experts (MoE) models.
## Qwen3 1.7B Base

**Publisher:** Qwen · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **Downloads:** 19.24k · **Likes:** 19

Qwen3-1.7B-Base is the latest 1.7-billion-parameter base language model in the Qwen series, trained with a three-stage pre-training pipeline and supporting a 32K-token context length.
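All of these checkpoints are tagged as Transformers-compatible base (non-instruct) models, so they are used for plain text completion rather than chat. The sketch below shows the standard Hugging Face loading path; the hub id is assumed from the usual Qwen naming and should be verified before use.

```python
# Hedged sketch: text completion with a Qwen3 base checkpoint via
# Hugging Face Transformers. The hub id is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-1.7B-Base"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native dtype
    device_map="auto",    # place weights on the available device(s)
)

# Base models do raw completion: no chat template is applied.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```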