LUKE Japanese Base Lite
The lightweight version of LUKE Japanese Base, a pre-trained knowledge-enhanced contextual word and entity representation model optimized for Japanese tasks.
Downloads 403
Release Date: 10/25/2022
Model Overview
The Japanese version of LUKE (Language Understanding with Knowledge-based Embeddings). LUKE treats words and entities in text as independent tokens and outputs a context-dependent representation for each. This lite version does not include Wikipedia entity embeddings.
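Since the model emits one vector per token, downstream tasks typically pool those vectors into a single fixed-size sentence representation. The sketch below shows masked mean pooling, a common pooling choice; it is illustrative only, with a random array standing in for the model's real last hidden state, and the function name and shapes are assumptions rather than this model's API.

```python
import numpy as np

def mean_pool(last_hidden_state: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token vectors, ignoring padding positions.

    last_hidden_state: (batch, seq_len, hidden) per-token representations.
    attention_mask:    (batch, seq_len), 1 for real tokens, 0 for padding.
    """
    mask = attention_mask[:, :, None].astype(last_hidden_state.dtype)  # (batch, seq_len, 1)
    summed = (last_hidden_state * mask).sum(axis=1)                    # (batch, hidden)
    counts = mask.sum(axis=1)                                          # (batch, 1)
    return summed / np.clip(counts, 1e-9, None)                        # avoid divide-by-zero

# Stand-in for real model output: 2 sequences, 5 tokens each, 768-dim vectors.
rng = np.random.default_rng(0)
hidden = rng.standard_normal((2, 5, 768))
mask = np.array([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])
sentence_vecs = mean_pool(hidden, mask)
print(sentence_vecs.shape)  # (2, 768)
```

With a real checkpoint, `last_hidden_state` and `attention_mask` would come from the tokenizer and model outputs; the pooling itself is unchanged.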
Model Features
Knowledge-Enhanced Representation
Combines entity information with contextual representations to enhance the model's understanding of entity-related tasks.
Lightweight Design
Removes Wikipedia entity embeddings to reduce model size, making it suitable for tasks that do not require processing Wikipedia entities.
Japanese Optimization
Specifically optimized for Japanese language characteristics, achieving excellent performance on the JGLUE benchmark.
Model Capabilities
Text Understanding
Entity Recognition
Relation Classification
Question Answering
Semantic Similarity Calculation
Use Cases
Natural Language Processing
Sentiment Analysis
Analyze the sentiment polarity of Japanese text.
Achieved 96.5% accuracy on the MARC-ja task.
Semantic Similarity Calculation
Calculate the semantic similarity between two Japanese sentences.
Achieved a Pearson correlation coefficient of 0.916 on the JSTS task.
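Sentence-pair similarity of this kind is commonly scored by comparing pooled sentence embeddings with cosine similarity. A minimal sketch, where the two short vectors are illustrative stand-ins for real model embeddings:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

# Illustrative stand-ins for two pooled sentence embeddings.
v1 = np.array([0.2, 0.8, 0.1])
v2 = np.array([0.25, 0.7, 0.05])
print(round(cosine_similarity(v1, v2), 3))
```

Scores near 1.0 indicate near-identical meaning; benchmark metrics such as the JSTS Pearson correlation measure how well these scores track human similarity judgments.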
Natural Language Inference
Determine the logical relationship between two Japanese sentences.
Achieved 91.2% accuracy on the JNLI task.
Question Answering
Commonsense Question Answering
Answer commonsense-based Japanese questions.
Achieved 84.2% accuracy on the JCommonsenseQA task.