# Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation

Janus is a novel autoregressive framework that unifies multimodal understanding and generation. It decouples visual encoding, enhancing flexibility and performance, making it a strong candidate for next-generation unified multimodal models.
## Quick Start

Please refer to the [GitHub repository](https://github.com/deepseek-ai/Janus).
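As a minimal sketch of multimodal understanding, the snippet below follows the example code in the GitHub repository. It assumes the `janus` package from that repository is installed and uses the model id `deepseek-ai/Janus-1.3B`; exact class and method names may differ between repository versions.

```python
# Minimal multimodal-understanding sketch, adapted from the repository's
# example code. Assumes: pip install git+https://github.com/deepseek-ai/Janus.git
import torch
from transformers import AutoModelForCausalLM
from janus.models import VLChatProcessor
from janus.utils.io import load_pil_images

model_path = "deepseek-ai/Janus-1.3B"  # assumed model id
processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = processor.tokenizer
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
model = model.to(torch.bfloat16).cuda().eval()

# A single-turn conversation with one image attached.
conversation = [
    {
        "role": "User",
        "content": "<image_placeholder>\nDescribe this image.",
        "images": ["./example.png"],  # hypothetical local image path
    },
    {"role": "Assistant", "content": ""},
]

# Load the referenced images and batch them together with the text.
pil_images = load_pil_images(conversation)
inputs = processor(
    conversations=conversation, images=pil_images, force_batchify=True
).to(model.device)

# Fuse image and text embeddings, then generate with the language model.
inputs_embeds = model.prepare_inputs_embeds(**inputs)
outputs = model.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
)
print(tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True))
```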
## Features
- Unified Framework: Janus unifies multimodal understanding and generation in a single autoregressive framework.
- Decoupled Visual Encoding: It decouples visual encoding into separate pathways, alleviating conflicts and enhancing flexibility.
- High Performance: Surpasses previous unified models and matches or exceeds task-specific models.
## Documentation
## Update

2024.10.20: We have uploaded the correct `tokenizer_config.json`. The previous file was missing the `pad_token`, which caused poor visual generation results.
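Given that fix, a quick way to confirm your local copy of the tokenizer carries a pad token is to load it with the standard `transformers` API; the model id `deepseek-ai/Janus-1.3B` below is an assumption.

```python
# Check that the fixed tokenizer_config.json defines a pad token.
# Assumes the model id "deepseek-ai/Janus-1.3B"; adjust to your checkpoint.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/Janus-1.3B")
assert tokenizer.pad_token is not None, "pad_token missing from tokenizer_config.json"
print(tokenizer.pad_token, tokenizer.pad_token_id)
```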
## Introduction

Janus is a novel autoregressive framework that unifies multimodal understanding and generation. It addresses the limitations of previous approaches by decoupling visual encoding into separate pathways, while still utilizing a single, unified transformer architecture for processing. The decoupling not only alleviates the conflict between the visual encoder's roles in understanding and generation, but also enhances the framework's flexibility. Janus surpasses previous unified models and matches or exceeds the performance of task-specific models. The simplicity, high flexibility, and effectiveness of Janus make it a strong candidate for next-generation unified multimodal models.
## Model Summary

Janus is a unified understanding and generation MLLM that decouples visual encoding for multimodal understanding and generation. Janus is built on DeepSeek-LLM-1.3b-base, which was trained on a corpus of approximately 500B text tokens.

For multimodal understanding, it uses [SigLIP-L](https://huggingface.co/timm/ViT-L-16-SigLIP-384) as the vision encoder, which supports 384 x 384 image input. For image generation, Janus uses the tokenizer from here with a downsample rate of 16.
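A downsample rate of 16 fixes the number of discrete image tokens per image. As a back-of-the-envelope check, assuming square 384 x 384 generation (matching the understanding encoder's input size, which is an assumption here):

```python
# Token budget per generated image: downsample rate of 16 is from the model
# summary above; the 384 x 384 output size is an assumption.
image_size, downsample = 384, 16
grid = image_size // downsample   # 24 latent positions per side
print(grid * grid)                # 576 image tokens per image
```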
## License

This code repository is licensed under the MIT License. The use of Janus models is subject to the DeepSeek Model License.
## Citation

```bibtex
@misc{wu2024janus,
      title={Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation},
      author={Chengyue Wu and Xiaokang Chen and Zhiyu Wu and Yiyang Ma and Xingchao Liu and Zizheng Pan and Wen Liu and Zhenda Xie and Xingkai Yu and Chong Ruan and Ping Luo},
      year={2024},
      eprint={2410.13848},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2410.13848},
}
```
## Contact
If you have any questions, please raise an issue or contact us at service@deepseek.com.
## Information Table

| Property | Details |
|----------|---------|
| Model Type | Unified multimodal understanding and generation MLLM |
| Training Data | Corpus of approximately 500B text tokens |
| Vision Encoder for Understanding | [SigLIP-L](https://huggingface.co/timm/ViT-L-16-SigLIP-384) (supports 384 x 384 image input) |
| Tokenizer for Image Generation | From here, with a downsample rate of 16 |
| License for Code Repository | MIT License |
| License for Models | DeepSeek Model License |