MambaVision: A Hybrid Mamba-Transformer Vision Backbone
MambaVision is the first hybrid model for computer vision, combining the strengths of Mamba and Transformers to achieve high performance in image classification.
🚀 Quick Start
To start using MambaVision, install the required package:
pip install mambavision
✨ Features
- Hybrid Architecture: The first hybrid model for computer vision that leverages the strengths of Mamba and Transformers.
- Enhanced Mamba Formulation: Redesigned the Mamba formulation to better model visual features.
- Comprehensive Ablation Study: Conducted a study on integrating Vision Transformers (ViT) with Mamba, showing that adding self-attention blocks at the final layers improves long-range spatial dependency modeling.
- Hierarchical Architecture: Introduced a family of MambaVision models with a hierarchical architecture to meet various design criteria.
- Strong Performance: Achieved a new SOTA Pareto front in terms of Top-1 accuracy and throughput.
📦 Installation
Install MambaVision and its requirements by running the following (the usage examples below also import transformers, timm, Pillow, and requests, so make sure these are available in your environment):
pip install mambavision
💻 Usage Examples
Basic Usage
For each model, we offer two variants, one for image classification and one for feature extraction, each of which can be imported with one line of code, as shown below.
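Both variants load through the standard transformers auto classes (the checkpoint name is the one used in the snippets that follow):

from transformers import AutoModel, AutoModelForImageClassification

# Classification variant: returns logits over the ImageNet-1K classes
classifier = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-L2-1K", trust_remote_code=True)

# Feature-extraction variant: returns per-stage feature maps plus pooled features
backbone = AutoModel.from_pretrained("nvidia/MambaVision-L2-1K", trust_remote_code=True)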
Image Classification
In the following example, we demonstrate how MambaVision can be used for image classification. Given an image from the COCO val2017 set as input, the snippet below classifies it:
from transformers import AutoModelForImageClassification
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests

# Load the classification variant and put it on the GPU in eval mode
model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-L2-1K", trust_remote_code=True)
model.cuda().eval()

# Fetch a sample image from the COCO val2017 set
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Build the eval transform from the model's own preprocessing config
input_resolution = (3, 224, 224)
transform = create_transform(input_size=input_resolution,
                             is_training=False,
                             mean=model.config.mean,
                             std=model.config.std,
                             crop_mode=model.config.crop_mode,
                             crop_pct=model.config.crop_pct)

inputs = transform(image).unsqueeze(0).cuda()

# Forward pass and top-1 prediction
outputs = model(inputs)
logits = outputs['logits']
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
The predicted label is brown bear, bruin, Ursus arctos.
Feature Extraction
MambaVision can also be used as a generic feature extractor. Specifically, you can extract the outputs of each of the model's four stages as well as the final average-pooled features, which are flattened.
The following snippet can be used for feature extraction:
from transformers import AutoModel
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests

# Load the feature-extraction variant and put it on the GPU in eval mode
model = AutoModel.from_pretrained("nvidia/MambaVision-L2-1K", trust_remote_code=True)
model.cuda().eval()

# Fetch a sample image from the COCO val2017 set
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Build the eval transform from the model's own preprocessing config
input_resolution = (3, 224, 224)
transform = create_transform(input_size=input_resolution,
                             is_training=False,
                             mean=model.config.mean,
                             std=model.config.std,
                             crop_mode=model.config.crop_mode,
                             crop_pct=model.config.crop_pct)

inputs = transform(image).unsqueeze(0).cuda()

# The model returns the flattened average-pooled features and a list
# containing the feature maps of all four stages
out_avg_pool, features = model(inputs)
print("Size of the averaged pool features:", out_avg_pool.size())
print("Number of stages in extracted features:", len(features))
print("Size of extracted features in stage 1:", features[0].size())
print("Size of extracted features in stage 4:", features[3].size())
📚 Documentation
Model Overview
We have developed the first hybrid model for computer vision which leverages the strengths of Mamba and Transformers. Specifically, our core contribution includes redesigning the Mamba formulation to enhance its capability for efficient modeling of visual features. In addition, we conducted a comprehensive ablation study on the feasibility of integrating Vision Transformers (ViT) with Mamba. Our results demonstrate that equipping the Mamba architecture with several self-attention blocks at the final layers greatly improves the modeling capacity to capture long-range spatial dependencies. Based on our findings, we introduce a family of MambaVision models with a hierarchical architecture to meet various design criteria.
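As a minimal, hypothetical sketch of this layout (the class and constructor arguments are ours, not the released implementation), each stage can be written so that its final layers are self-attention blocks while the earlier layers are Mamba-style mixers:

import torch.nn as nn

# Hypothetical sketch of a hybrid stage (not the released code): the first
# layers use a Mamba-style mixer block and the final `num_attn` layers use
# self-attention, reflecting the finding that attention at the end of the
# stage improves long-range spatial dependency modeling.
class HybridStage(nn.Module):
    def __init__(self, depth, num_attn, make_mixer, make_attn):
        super().__init__()
        self.blocks = nn.ModuleList(
            [make_attn() if i >= depth - num_attn else make_mixer()
             for i in range(depth)]
        )

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x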
Model Performance
MambaVision demonstrates strong performance, achieving a new SOTA Pareto front in terms of Top-1 accuracy and throughput.

🔧 Technical Details
- Model Design: The hybrid model combines Mamba and Transformers, with a redesigned Mamba formulation for better visual feature modeling.
- Ablation Study: The study on integrating ViT with Mamba shows that adding self-attention blocks at the final layers of Mamba improves long-range spatial dependency modeling.
- Hierarchical Architecture: The hierarchical architecture of MambaVision models is designed to meet various design criteria.
📄 License
The model is released under the NVIDIA Source Code License-NC.
Additional Information
| Property | Details |
|---|---|
| Model Type | Image Classification |
| Training Data | ILSVRC/imagenet-1k |
| Library Name | transformers |
| Pipeline Tag | image-classification |
| License Name | nvclv1 |
| License Link | LICENSE |