# 1.5B Parameter Scale
## RWKV-7 1.5B World

fla-hub · Apache-2.0 · Large Language Model · Transformers · Multilingual · Downloads: 632 · Likes: 9

The RWKV-7 model adopts a flash linear attention architecture and supports multilingual text generation tasks.
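For reference, a minimal generation sketch with the Transformers library. The Hub repo id `fla-hub/rwkv7-1.5B-world` is an assumption based on the publisher listed above, and the checkpoint is assumed to need the flash-linear-attention (`fla`) package plus `trust_remote_code=True`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fla-hub/rwkv7-1.5B-world"  # assumed repo id; verify on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Simple multilingual-capable text continuation.
prompt = "The Eiffel Tower is located in"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```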
## Chinese Text Correction 1.5B

shibing624 · Apache-2.0 · Large Language Model · Transformers · Chinese · Downloads: 1,085 · Likes: 9

A 1.5 billion parameter Chinese text correction model fine-tuned from Qwen2.5-1.5B-Instruct (Qwen2.5 architecture), suitable for Chinese text correction and general text generation tasks.
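A minimal correction sketch, assuming the Hub repo id `shibing624/chinese-text-correction-1.5b` and a Qwen2.5-style chat template; the exact prompt format is an assumption, so check the model card before relying on it:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shibing624/chinese-text-correction-1.5b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "少先队员因该为老人让坐。"  # sentence containing typos to correct
messages = [{"role": "user", "content": "文本纠错：\n" + text}]  # assumed prompt format
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens (the corrected sentence).
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```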
## LdIR Qwen2 Reranker 1.5B

neofung · Apache-2.0 · Text Embedding · Transformers · Multilingual · Downloads: 51 · Likes: 3

A downstream task model based on Qwen2-1.5B that specializes in re-ranking, performing well on Chinese medical Q&A and general text re-ranking tasks.
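A generic cross-encoder re-ranking sketch: score each (query, document) pair and sort by score. Both the repo id `neofung/LdIR-Qwen2-reranker-1.5B` and the sequence-classification interface are assumptions based on common reranker usage; consult the model card for its actual API:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "neofung/LdIR-Qwen2-reranker-1.5B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(model_id, trust_remote_code=True)
model.eval()

query = "高血压患者日常饮食需要注意什么？"
docs = [
    "高血压患者应当减少钠盐摄入，保持低脂饮食。",
    "今天天气晴朗，适合户外运动。",
]

# Encode each (query, document) pair; higher logits mean higher relevance.
inputs = tokenizer([query] * len(docs), docs, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

for doc, score in sorted(zip(docs, scores.tolist()), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}\t{doc}")
```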
## Open Australian Legal LLM

isaacus · Apache-2.0 · Large Language Model · Transformers · Multilingual · Downloads: 185 · Likes: 6

The largest open-source language model trained on Australian law, with 1.5 billion parameters, suited to natural language processing tasks in the Australian legal domain.
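A minimal generation sketch using the Transformers pipeline; the repo id `isaacus/open-australian-legal-llm` is inferred from the publisher listed above and should be verified on the Hub:

```python
from transformers import pipeline

# Assumed repo id; verify on the Hub before use.
generator = pipeline("text-generation", model="isaacus/open-australian-legal-llm")

prompt = "Under the Corporations Act 2001 (Cth), a director's duty of care"
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```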
## DeBERTa V2 XXLarge

microsoft · MIT · Large Language Model · Transformers · English · Downloads: 9,179 · Likes: 33

DeBERTa V2 XXLarge is an improved BERT-style model built on disentangled attention and an enhanced mask decoder; with 1.5 billion parameters, it surpasses BERT and RoBERTa on multiple natural language understanding tasks.
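DeBERTa V2 XXLarge is an encoder meant to be fine-tuned on downstream NLU tasks rather than used for free-form generation. A minimal sketch of attaching a classification head with Transformers; the two-label setup is purely illustrative:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "microsoft/deberta-v2-xxlarge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Attaches a randomly initialised 2-label head; fine-tune before trusting its outputs.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer("The movie was surprisingly good.", return_tensors="pt")
logits = model(**inputs).logits
print(logits)
```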