
DeBERTa V3 Japanese Large

Developed by globis-university
A large-scale DeBERTa V3 model trained on Japanese resources and optimized for Japanese language processing: it requires no morphological analyzer at inference time while still respecting word boundaries.
Downloads 519.17k
Release Date: 9/21/2023

Model Overview

This is a DeBERTa V3 model trained on Japanese resources and optimized for Japanese language processing. It requires no morphological analyzer during inference and respects word boundaries to a certain extent.
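As a minimal sketch of what this means in practice (assuming the checkpoint is published on the Hugging Face Hub as globis-university/deberta-v3-japanese-large), raw Japanese text can be handed directly to the tokenizer, with no separate morphological-analysis step such as MeCab or fugashi:

```python
from transformers import AutoTokenizer

# Assumed Hugging Face Hub repo id; adjust to the checkpoint you actually use.
model_name = "globis-university/deberta-v3-japanese-large"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Raw Japanese text goes straight in -- no pre-segmentation step is required.
text = "日本語の文章をそのままトークナイズできます。"
print(tokenizer.tokenize(text))      # subword tokens
print(tokenizer(text)["input_ids"])  # corresponding vocabulary ids
```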

Model Features

Japanese Optimization
Designed specifically for Japanese, so inference works without a morphological analyzer.
Word Boundary Respect
Tokens never span word boundaries, so no cross-word subwords are produced.
Compact Vocabulary
Uses a considerably smaller vocabulary than the original DeBERTa V3's extensive one.
Hugging Face Ecosystem Compatible
The tokenizer is fully compatible with the Hugging Face ecosystem (see the sketch after this list).
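Because the tokenizer works with standard Hugging Face tooling, loading the model needs nothing beyond the usual Auto* classes. A minimal sketch, again assuming the globis-university/deberta-v3-japanese-large repo id, that prints the vocabulary size and extracts contextual embeddings:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "globis-university/deberta-v3-japanese-large"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# The compact vocabulary is visible directly on the tokenizer object.
print("vocab size:", tokenizer.vocab_size)

inputs = tokenizer("これはテスト用の文です。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per token: (batch, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```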

Model Capabilities

Japanese Text Understanding
Token Classification
Natural Language Processing

Use Cases

Natural Language Processing
Japanese Text Analysis
Used for in-depth analysis and understanding of Japanese text.
Japanese Token Classification
Performs token-level classification tasks (e.g., named entity recognition) on Japanese text, as sketched below.
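A minimal token-classification sketch, assuming the globis-university/deberta-v3-japanese-large repo id. The label set below is purely illustrative, and the base checkpoint ships without a fine-tuned classification head, so the model must first be fine-tuned on labeled data before its predictions are meaningful:

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_name = "globis-university/deberta-v3-japanese-large"  # assumed repo id

# Illustrative toy NER label scheme (not shipped with the model).
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"]

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

text = "山田太郎は東京の会社で働いています。"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each token to its highest-scoring label (random until fine-tuned).
pred_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, pred_ids):
    print(token, labels[int(pred)])
```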