# Long context understanding
## Qwen3 Embedding 8B Auto
**License:** Apache-2.0 · **Tags:** Text Embedding · **By:** michaelfeil · **Downloads:** 135 · **Likes:** 1

The Qwen3 Embedding series is the latest embedding model family developed in-house by the Tongyi (Qwen) team. Designed specifically for text embedding and ranking tasks, it supports more than 100 languages and ranks first on the MTEB multilingual leaderboard.
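A minimal sketch of using a Qwen3 embedding checkpoint for similarity scoring with the sentence-transformers library. The repo ID `Qwen/Qwen3-Embedding-8B` refers to the upstream checkpoint and is assumed here; the card above lists michaelfeil's variant, whose exact ID may differ.

```python
# Minimal sketch: dense retrieval-style similarity with a Qwen3 embedding model.
# Assumes the upstream "Qwen/Qwen3-Embedding-8B" repo and sentence-transformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Qwen/Qwen3-Embedding-8B")

queries = ["What is the capital of France?"]
documents = [
    "Paris is the capital and largest city of France.",
    "The Great Wall of China stretches thousands of kilometers.",
]

# Encode both sides into dense vectors and score with cosine similarity.
query_emb = model.encode(queries)
doc_emb = model.encode(documents)
scores = util.cos_sim(query_emb, doc_emb)
print(scores)  # higher score = more relevant document
```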
## Qwen3 30B A3B Base
**License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **By:** unsloth · **Downloads:** 1,822 · **Likes:** 3

Qwen3-30B-A3B-Base belongs to the latest generation of large language models in the Qwen series, with many improvements in training data, model architecture, and optimization techniques that deliver more powerful language processing.
## Qwen3 8B Base
**License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **By:** unsloth · **Downloads:** 5,403 · **Likes:** 1

Qwen3-8B-Base is part of the latest generation of the Tongyi large model series, with 8.2 billion parameters and support for 119 languages, making it suitable for a wide variety of natural language processing tasks.
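The base checkpoints above are pretraining checkpoints that continue raw text; they are not instruction-tuned chat models. A minimal sketch of plain text completion with the transformers pipeline, assuming the upstream repo ID `Qwen/Qwen3-8B-Base`:

```python
# Minimal sketch: plain text completion with a Qwen3 base checkpoint.
# Base models complete raw text, so we prompt with a sentence to continue
# rather than a chat message.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen3-8B-Base",  # assumed upstream repo ID
    torch_dtype="auto",
    device_map="auto",
)
out = generator("The key idea behind mixture-of-experts models is", max_new_tokens=64)
print(out[0]["generated_text"])
```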
## Qwen3 1.7B Base
**License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **By:** unsloth · **Downloads:** 7,444 · **Likes:** 2

Qwen3-1.7B-Base belongs to the latest generation of the Tongyi large language model series, which offers both dense and mixture-of-experts (MoE) models and brings significant improvements in training data, model architecture, and optimization techniques.
## Qwen3 0.6B Base Unsloth Bnb 4bit
**License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **By:** unsloth · **Downloads:** 10.84k · **Likes:** 1

Qwen3-0.6B-Base belongs to the latest generation of large language models in the Tongyi series. It has 0.6B parameters, supports 119 languages, and handles a context length of up to 32,768 tokens.
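The "Bnb 4bit" suffix indicates a checkpoint pre-quantized with bitsandbytes 4-bit weights, so the quantization config ships inside the checkpoint and no extra setup is needed at load time. A minimal sketch, assuming the repo ID `unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit` (taken from the card name) and an installed bitsandbytes:

```python
# Minimal sketch: loading a pre-quantized bnb-4bit checkpoint.
# Requires the bitsandbytes package; the repo ID is assumed from the card name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",
    torch_dtype=torch.bfloat16,  # compute dtype; weights stay 4-bit
)

inputs = tokenizer("Large language models are", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```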
## Qwen3 0.6B Base
**License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **By:** unsloth · **Downloads:** 10.84k · **Likes:** 2

Qwen3-0.6B-Base belongs to the latest generation of large language models in the Tongyi Qianwen (Qwen) series, which offers both dense and Mixture-of-Experts (MoE) models.
## InternVL3 8B
**License:** Other · **Tags:** Multimodal Fusion, Transformers, Other · **By:** FriendliAI · **Downloads:** 167 · **Likes:** 0

InternVL3-8B is an advanced multimodal large language model with strong multimodal perception and reasoning capabilities, and it performs well in fields such as tool use, GUI agents, and industrial image analysis.
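A minimal sketch of multimodal inference with InternVL3-8B via the transformers image-text-to-text pipeline. This assumes the transformers-native `OpenGVLab/InternVL3-8B-hf` conversion of the checkpoint; the original repo instead exposes a custom chat() interface through trust_remote_code, and the image URL below is a placeholder.

```python
# Minimal sketch: image + text inference with InternVL3-8B.
# Assumes the transformers-native "-hf" conversion of the checkpoint.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="OpenGVLab/InternVL3-8B-hf")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/gauge.png"},  # placeholder URL
        {"type": "text", "text": "Describe any anomalies visible in this industrial gauge."},
    ],
}]
print(pipe(text=messages, max_new_tokens=64))
```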
## Chinese LLaMA-2-13B-16K
**License:** Apache-2.0 · **Tags:** Large Language Model, Transformers, Supports Multiple Languages · **By:** hfl · **Downloads:** 10.62k · **Likes:** 14

A complete Chinese LLaMA-2-13B-16K model that supports a 16K context length and can be loaded directly for inference and full-parameter training.
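Before feeding a long document to the 16K variant, it is worth confirming the configured context window on the loaded checkpoint. A minimal sketch, assuming the repo ID `hfl/chinese-llama-2-13b-16k`:

```python
# Minimal sketch: load the 16K-context checkpoint and inspect its window.
# The repo ID is assumed from the card above.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

repo = "hfl/chinese-llama-2-13b-16k"
config = AutoConfig.from_pretrained(repo)
print(config.max_position_embeddings)  # expect a 16K-scale context window

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```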