# Language Models
## Bitnet B1 58 Large

**Author:** 1bitLLM · **License:** MIT · **Tags:** Large Language Model, Transformers · **Downloads:** 10.17k · **Likes:** 95

BitNet b1.58 is a 1.58-bit large language model whose weights are ternary {-1, 0, 1}. This checkpoint is the "large" (roughly 0.7B-parameter) variant, trained on the RedPajama dataset for 100 billion tokens.
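A minimal usage sketch, assuming the checkpoint loads through the standard transformers API; the repo id below and the `trust_remote_code` requirement are assumptions based on the listing, not confirmed by it:

```python
# Sketch: loading the checkpoint with the standard transformers API.
# Assumes the repo id "1bitLLM/bitnet_b1_58-large" and that any custom
# modeling code it ships is accepted via trust_remote_code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "1bitLLM/bitnet_b1_58-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```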
## Bitnet B1 58 3B

**Author:** 1bitLLM · **License:** MIT · **Tags:** Large Language Model, Transformers · **Downloads:** 1,109 · **Likes:** 249

BitNet b1.58 is a 1.58-bit quantized large language model that achieves efficient inference by quantizing weights to the ternary values {-1, 0, 1}. This 3B-parameter checkpoint reproduces the original paper's results and was trained on 100 billion tokens from the RedPajama dataset.
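The ternary scheme mentioned above can be illustrated with the absmean quantizer described in the BitNet b1.58 paper: scale the weight tensor by its mean absolute value, round, and clip to [-1, 1]. The following is an illustrative PyTorch sketch, not the repository's actual kernel:

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Quantize a weight tensor to {-1, 0, 1} with a per-tensor scale.

    Follows the absmean scheme from the BitNet b1.58 paper:
    scale by the mean absolute value, round, then clip to [-1, 1].
    """
    gamma = w.abs().mean().clamp(min=eps)   # per-tensor scale
    w_q = (w / gamma).round().clamp(-1, 1)  # ternary weights
    return w_q, gamma                       # dequantize: w ~ w_q * gamma

w = torch.randn(4, 4)
w_q, gamma = absmean_ternary_quantize(w)
print(w_q.unique())  # tensor([-1., 0., 1.])
```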
## Vda Fine Tuned 2

**Author:** calogero-jerik-scozzaro · **Tags:** Large Language Model, Transformers · **Downloads:** 15 · **Likes:** 1

A fine-tuned version of GroNLP/gpt2-small-italian, suitable for Italian text-generation tasks.
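A sketch of how such a GPT-2 fine-tune is typically used for Italian generation; the repo id below is inferred from the listing and is an assumption (the base model GroNLP/gpt2-small-italian loads the same way):

```python
# Sketch: Italian text generation with the fine-tuned checkpoint.
# The repo id is an assumption based on the listing entry.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="calogero-jerik-scozzaro/vda_fine_tuned_2",
)
print(generator("L'Italia è", max_new_tokens=30)[0]["generated_text"])
```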
## Ptt5 Small Portuguese Keyword Extractor V2

**Author:** cnmoro · **License:** MIT · **Tags:** Large Language Model, Transformers, Other · **Downloads:** 26 · **Likes:** 1

A Portuguese-language model. The name suggests a PTT5-small (Portuguese T5) checkpoint fine-tuned for keyword extraction, but the model card does not clearly document its intended use.
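If the name is accurate and this is a T5-style seq2seq keyword extractor, usage would presumably look like the following sketch; the repo id is inferred from the listing and the input/output format is an assumption:

```python
# Sketch, assuming the checkpoint behaves like a seq2seq (T5) keyword
# extractor: feed Portuguese text, read extracted keywords back out.
from transformers import pipeline

extractor = pipeline(
    "text2text-generation",
    model="cnmoro/ptt5-small-portuguese-keyword-extractor-v2",
)
texto = "A inteligência artificial está transformando a indústria brasileira."
print(extractor(texto, max_new_tokens=20)[0]["generated_text"])
```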
## Ernie 3.0 Xbase Zh

**Author:** nghuyong · **Tags:** Large Language Model, Transformers, Chinese · **Downloads:** 14.27k · **Likes:** 20

ERNIE 3.0 is a large-scale knowledge-enhanced pre-trained model for language understanding and generation, developed by Baidu.
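Feature extraction follows the standard transformers pattern, since the library has native ERNIE support; a minimal sketch:

```python
# Extracting sentence features with the Chinese ERNIE 3.0 checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-3.0-xbase-zh")
model = AutoModel.from_pretrained("nghuyong/ernie-3.0-xbase-zh")

# Example input: "Knowledge-enhanced pre-trained model developed by Baidu"
inputs = tokenizer("百度开发的知识增强预训练模型", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```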
## Roberta Base Serbian

**Author:** KoichiYasuoka · **Tags:** Large Language Model, Transformers, Other · **Downloads:** 20 · **Likes:** 1

A RoBERTa model for Serbian (Cyrillic and Latin scripts), pretrained on srWaC and suitable for fine-tuning on downstream tasks.
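Before fine-tuning, the pretrained checkpoint can be probed with a fill-mask pipeline; a minimal sketch (the mask token is read from the tokenizer rather than assumed):

```python
# Sketch: masked-word prediction with the Serbian RoBERTa checkpoint.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="KoichiYasuoka/roberta-base-serbian")
mask = unmasker.tokenizer.mask_token  # mask token depends on the tokenizer

# "Belgrade is the capital of <mask>."
for pred in unmasker(f"Београд је главни град {mask}.")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```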
## Question Intimacy

**Author:** pedropei · **Tags:** Large Language Model, English · **Downloads:** 92 · **Likes:** 0