
LINE DistilBERT Base Japanese

Developed by line-corporation
A DistilBERT model pre-trained on 131 GB of Japanese web text, developed by LINE Corporation
Downloads: 12.92k
Release date: 3/9/2023

Model Overview

This is a lightweight Japanese DistilBERT model suitable for a variety of natural language processing tasks, such as text classification, question answering, and semantic understanding.
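As a quick orientation, here is a minimal loading sketch. It assumes the Hugging Face model id `line-corporation/line-distilbert-base-japanese` (the developer's published checkpoint) and that `torch`, `transformers`, and the tokenizer dependencies (`fugashi`, `unidic-lite`, `sentencepiece`) are installed; `trust_remote_code=True` is needed because the repository ships a custom MeCab-based tokenizer class.

```python
# Minimal loading sketch. The model id and the extra tokenizer
# dependencies (fugashi, unidic-lite, sentencepiece) are assumptions
# based on the developer's published checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "line-corporation/line-distilbert-base-japanese"

# trust_remote_code=True: the repo provides a custom MeCab-based tokenizer.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_ID)

inputs = tokenizer("こんにちは、世界。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, seq_len, 768])
```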

Model Features

Lightweight and Efficient
40% fewer parameters than the full BERT model while maintaining over 90% of its performance
Specialized Japanese Tokenization
Uses MeCab with the UniDic dictionary for high-quality Japanese tokenization (see the tokenization sketch after this list)
Extensive Pre-training
Pre-trained on 131 GB of Japanese web text
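A short sketch of what that tokenization looks like in practice, under the same model-id and dependency assumptions as above; that the subword step follows the MeCab/UniDic segmentation is how such tokenizers typically work, and is stated here as an assumption.

```python
# Tokenization sketch: MeCab/UniDic word segmentation followed by
# subword splitting (same model-id assumption as above).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "line-corporation/line-distilbert-base-japanese",
    trust_remote_code=True,
)
# Prints a list of subword tokens for a Japanese sentence.
print(tokenizer.tokenize("自然言語処理の研究をしています。"))
```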

Model Capabilities

Japanese Text Understanding
Text Classification
Question Answering Systems
Semantic Similarity Calculation
Masked Language Modeling (see the fill-mask sketch after this list)
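For masked language modeling, a hedged sketch using the `transformers` fill-mask pipeline; the model id is assumed as above and the example sentence is illustrative.

```python
# Fill-mask sketch: predict the [MASK] token in a Japanese sentence.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="line-corporation/line-distilbert-base-japanese",
    trust_remote_code=True,
)
# Illustrative sentence: "At LINE Corporation, we research and develop [MASK]."
for pred in fill_mask("LINE株式会社で[MASK]の研究・開発をしている。"):
    print(pred["token_str"], round(pred["score"], 3))
```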

Use Cases

Enterprise Applications
Corporate Document Processing
Used for processing and analyzing internal Japanese documents within an enterprise, for example matching related documents by semantic similarity (see the sketch after this list)
Research and Development
Natural Language Processing Research
Used as a base model for Japanese NLP-related research
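As an illustration of the document-analysis use case, here is a hedged sketch of computing semantic similarity between two short Japanese texts. Mean pooling over the last hidden state and cosine similarity are assumptions chosen for the example, not a method prescribed by the model card.

```python
# Semantic-similarity sketch: mean-pooled embeddings + cosine similarity.
# Pooling strategy is an assumption; model id as above.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "line-corporation/line-distilbert-base-japanese"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_ID)

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last hidden state over non-padding tokens."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq, dim)
    mask = inputs["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

a = embed("社内規定の改定に関するお知らせ")  # "Notice of internal-policy revision"
b = embed("就業規則の変更についての通知")    # "Notification of work-rule changes"
print(torch.nn.functional.cosine_similarity(a, b).item())
```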