# Juman++ Tokenization
## Bigbird Base Japanese

A Japanese BigBird model pretrained on Japanese Wikipedia, CC-100, and OSCAR, suited to tasks that require long input sequences.

Large Language Model · Transformers · Japanese
Maintainer: nlp-waseda · Downloads: 38 · Likes: 5
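The entry's focus is long-sequence processing: BigBird's block-sparse attention scales roughly linearly with input length, so documents well beyond 512 tokens are feasible. Below is a minimal encoding sketch, assuming the checkpoint is published on the hub as nlp-waseda/bigbird-base-japanese and that, like the maintainer's other models listed here, input is word-segmented with Juman++ before tokenization (a segmentation sketch appears under Roberta Large Japanese below).

```python
from transformers import AutoModel, AutoTokenizer

# Assumed hub id for the listed model; adjust if the actual id differs.
MODEL_ID = "nlp-waseda/bigbird-base-japanese"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Input words are space-joined Juman++ output (a short illustrative sentence).
segmented = "早稲田 大学 で 自然 言語 処理 を 研究 する"
inputs = tokenizer(segmented, return_tensors="pt")

outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```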
## Deberta V2 Base Japanese

A Japanese DeBERTa V2 base model pretrained on Japanese Wikipedia, CC-100, and OSCAR, suitable for masked language modeling and fine-tuning on downstream tasks.

Large Language Model · Transformers · Japanese
Maintainer: ku-nlp · Downloads: 38.93k · Likes: 29
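The description mentions masked language modeling. A minimal fill-in-the-blank sketch with this checkpoint, assuming the input is pre-segmented into words by Juman++ (spaces between words), as the maintainer's models expect:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ku-nlp/deberta-v2-base-japanese")
model = AutoModelForMaskedLM.from_pretrained("ku-nlp/deberta-v2-base-japanese")

# Words are pre-segmented (Juman++ style); [MASK] marks the slot to predict.
text = "京都 大学 で 自然 言語 処理 を [MASK] する"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the mask position and decode the single most likely token.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(top_id))
```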
## Roberta Large Japanese

A large Japanese RoBERTa model pretrained on Japanese Wikipedia and the Japanese portion of CC-100, suitable for a broad range of Japanese natural language processing tasks.

Large Language Model · Transformers · Japanese
Maintainer: nlp-waseda · Downloads: 227 · Likes: 23
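Every model in this section expects input that has already been word-segmented by Juman++ before subword tokenization is applied. A minimal sketch of that pre-segmentation step, assuming a local Juman++ installation and the pyknp binding:

```python
from pyknp import Juman
from transformers import AutoTokenizer

# pyknp wraps the Juman++ binary, which must be installed and on PATH.
jumanpp = Juman()

raw = "早稲田大学で自然言語処理を研究する"
words = [mrph.midasi for mrph in jumanpp.analysis(raw).mrph_list()]
segmented = " ".join(words)  # space-joined word segmentation

# The subword tokenizer then splits within the Juman++ word boundaries.
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-large-japanese")
print(tokenizer.tokenize(segmented))
```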
## Roberta Base Japanese

A Japanese RoBERTa base model pretrained on Japanese Wikipedia and the Japanese portion of CC-100.

Large Language Model · Transformers · Japanese
Maintainer: nlp-waseda · Downloads: 456 · Likes: 32
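As a quick usage sketch, the fill-mask pipeline can query the base checkpoint directly; the input must again be Juman++-segmented, with spaces between words:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="nlp-waseda/roberta-base-japanese")

# Space-separated words are Juman++ output; [MASK] is the slot to fill.
for pred in fill("早稲田 大学 で 自然 言語 処理 を [MASK] する 。"):
    print(pred["token_str"], f"{pred['score']:.3f}")
```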