
LLM-jp-3.1-1.8b-instruct4

Developed by llm-jp
A large language model developed by the National Institute of Informatics in Japan. Built on LLM-jp-3, it significantly improves instruction-following ability through instruction pre-training.
Downloads 165
Release Date: 5/27/2025

Model Overview

The LLM-jp-3.1 series are Transformer-based language models that support multilingual processing, including Japanese, English, Chinese, and Korean, and have strong instruction-following capabilities.

Model Features

Strong instruction-following ability
Integrating instruction pre-training significantly improves the model's ability to follow instructions.
Multilingual support
Supports multiple languages, including Japanese, English, Chinese, and Korean.
Multiple architecture options
Available in different architectures, including dense and Mixture-of-Experts (MoE) models.
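Instruction pre-training, as described above, mixes instruction-response pairs into ordinary pre-training text so the model learns instruction following during pre-training rather than only at fine-tuning. A minimal sketch of that data formatting is below; the template and function name are illustrative assumptions, not llm-jp's exact format.

```python
# Illustrative sketch of instruction pre-training data construction.
# The "### Instruction / ### Response" template is an assumption for
# demonstration; it is not llm-jp's actual training format.

def format_instruction_doc(raw_text: str, pairs: list) -> str:
    """Append synthetic instruction-response pairs to a raw document,
    so instruction-following examples appear in the pre-training stream."""
    blocks = [raw_text]
    for instruction, response in pairs:
        blocks.append(
            f"### Instruction:\n{instruction}\n### Response:\n{response}"
        )
    return "\n\n".join(blocks)

doc = format_instruction_doc(
    "The capital of Japan is Tokyo.",
    [("What is the capital of Japan?", "Tokyo.")],
)
print(doc)
```

The resulting document interleaves plain text with instruction-formatted segments, which is the core idea behind the improved instruction following claimed for this series.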

Model Capabilities

Text generation
Instruction following
Multilingual processing
Natural language understanding

Use Cases

Natural language processing
Question-answering systems
Used to build multilingual question-answering systems that answer user questions.
On the MT-Bench evaluation, the model scores 6.30 in Japanese and 5.70 in English.
Instruction execution
Understands and executes complex natural-language instructions, making it suitable for automated task processing.
In the AnswerCarefully-Eval evaluation, the acceptance rate is 64.7% and the violation rate is 24.3%.