RoBERTa Large Japanese Aozora
A RoBERTa large model for Japanese, pretrained on Aozora Bunko texts and suitable for downstream-task fine-tuning
Release date: 3/2/2022
Model Overview
This is a RoBERTa large model for Japanese, pretrained on Aozora Bunko texts with the Japanese-LUW-Tokenizer. It can be fine-tuned for downstream tasks such as part-of-speech tagging and dependency parsing.
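As a quick sanity check, the model can be loaded for masked language modeling with the transformers library. A minimal sketch, assuming the model is published under a hub identifier like the one shown (substitute the actual ID):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# The hub identifier below is an assumption; substitute the model's actual ID.
model_id = "KoichiYasuoka/roberta-large-japanese-aozora"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Use tokenizer.mask_token so the example works whichever mask string the model defines.
text = f"日本に着いたら{tokenizer.mask_token}を訪問してください。"
print(fill_mask(text))  # top predictions for the masked long unit word
```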
Model Features
Pretrained on Aozora Bunko
Pretrained on texts from Aozora Bunko, a digital library of Japanese literature, providing high-quality, linguistically rich training data
Supports Downstream Task Fine-tuning
Can be fine-tuned for various NLP tasks, including part-of-speech tagging and dependency parsing (see the fine-tuning sketch after this list)
Dedicated Tokenizer Support
Uses the Japanese-LUW-Tokenizer, which segments text into long unit words (LUWs), a granularity better suited to Japanese text processing
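For downstream fine-tuning, the pretrained encoder can be loaded under a token-classification head. A minimal sketch for part-of-speech tagging; both the hub identifier and the label set are placeholders for illustration:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Both the hub identifier and the label set are assumptions for illustration.
model_id = "KoichiYasuoka/roberta-large-japanese-aozora"
labels = ["NOUN", "VERB", "ADJ", "ADV", "PUNCT"]  # illustrative UPOS subset

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(
    model_id,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
# The pretrained encoder weights are reused; only the classification head is
# freshly initialized. Train on your own token-labelled data from here,
# e.g. with transformers.Trainer.
```

Loading via AutoModelForTokenClassification keeps the pretrained encoder intact and attaches a new head sized to the label set, which is the usual starting point for tagging and parsing fine-tunes.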
Model Capabilities
Japanese Text Understanding
Masked Language Modeling
Part-of-Speech Tagging
Dependency Parsing
Use Cases
Natural Language Processing
Japanese Text Analysis
Analyzing grammatical structure and semantic relationships in Japanese text (a tagging sketch follows at the end of this section)
Japanese Language Teaching Aid
Can serve as a tool for Japanese learners, helping them understand complex sentence structures
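To illustrate the text-analysis use case, a tagging fine-tune of this model can be run through the token-classification pipeline. A minimal sketch; the "-upos" identifier is hypothetical and stands in for whatever tagging checkpoint you have trained or found on the hub:

```python
from transformers import pipeline

# "...-upos" is a hypothetical identifier for a POS-tagging fine-tune of this
# model; substitute your own fine-tuned checkpoint.
tagger = pipeline("token-classification",
                  model="KoichiYasuoka/roberta-large-japanese-aozora-upos")

# Print each recognized token alongside its predicted part-of-speech label.
for token in tagger("国境の長いトンネルを抜けると雪国であった。"):
    print(token["word"], token["entity"])
```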