# 32K Context Models
**Qwen3 Reranker 4B Seq** · Apache-2.0
Qwen3-Reranker-4B is the latest 4B-parameter text re-ranking model from the Tongyi (Qwen) family. It supports over 100 languages and performs strongly on text retrieval tasks.
Tags: Text Embedding, Transformers
michaelfeil · 122 · 1

**Qwen3 4B Base** · Apache-2.0
Qwen3-4B-Base is the latest-generation 4-billion-parameter base model in the Qwen series, pre-trained on 36 trillion tokens of multilingual data and supporting a 32K context length.
Tags: Large Language Model, Transformers
Qwen · 50.84k · 29

**Orcamaid V3 13B 32K GGUF** · Other
Orcamaid v3 13B 32K is a Llama-architecture large language model supporting a 32K sequence length, suited to text generation tasks.
Tags: Large Language Model, Transformers
TheBloke · 163 · 17

**Nanbeige 16B Base 32K GGUF** · Apache-2.0
Nanbeige 16B Base 32K is a large language model developed by Nanbeige LLM Lab, supporting both Chinese and English with a 32K context length and suitable for a range of text generation tasks.
Tags: Large Language Model, Multilingual
TheBloke · 1,451 · 4