## 🚀 Virchow2 Model
*Virchow2 is a self-supervised vision transformer. Pretrained on 3.1M whole-slide histopathology images, it can serve as a tile-level feature extractor for various computational pathology tasks.*
## 🚀 Quick Start
### Requirements
- PyTorch (2.0+ recommended)
- timm (>= 0.9.11 required)
- huggingface_hub
### Login
After gaining access to the model, you need to log in to Hugging Face in the environment where you plan to use the model. You can do this from the command line:

```bash
huggingface-cli login
```
or in your Python code:
```python
from huggingface_hub import login
login()
```

Please refer to the official Hugging Face documentation for more details.
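If you prefer a non-interactive setup (for example, on a remote machine), you can also pass a token directly. The sketch below assumes your access token is stored in an environment variable named `HF_TOKEN` (a name chosen here for illustration):

```python
# a minimal non-interactive login sketch; assumes the access token is stored
# in an environment variable named HF_TOKEN (illustrative choice)
import os
from huggingface_hub import login

login(token=os.environ["HF_TOKEN"])  # skips the interactive prompt
```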
## ✨ Features
- Feature Extraction: Can be used as a tile-level feature extractor (either frozen or finetuned) to achieve state-of-the-art results on a wide variety of downstream computational pathology use cases (see the linear-probe sketch after this list).
- Adaptability: Can be finetuned to adapt to specific tasks and/or datasets.
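As one illustration of the frozen feature-extractor workflow, the sketch below fits a simple linear probe on precomputed tile embeddings. The `.npy` filenames are hypothetical placeholders for features you have already extracted (see the usage examples below), and scikit-learn is an extra dependency not required by the model itself:

```python
# a minimal linear-probe sketch on frozen Virchow2 tile embeddings;
# the .npy filenames are hypothetical placeholders for features and labels
# you have already extracted (see the usage examples below)
import numpy as np
from sklearn.linear_model import LogisticRegression

embeddings = np.load("tile_embeddings.npy")  # shape: (N, 2560)
labels = np.load("tile_labels.npy")          # shape: (N,)

probe = LogisticRegression(max_iter=1000).fit(embeddings, labels)
print("train accuracy:", probe.score(embeddings, labels))
```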
## 📦 Installation

Installation consists mainly of installing the required libraries:
- PyTorch (2.0+ recommended)
- timm (>= 0.9.11 required)
- huggingface_hub
You can use pip to install these libraries:

```bash
pip install torch timm huggingface_hub
```
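After installing, you can quickly verify that the installed versions meet the requirements above:

```python
# sanity check of installed versions against the stated requirements
import torch
import timm

print(torch.__version__)  # 2.0+ recommended
print(timm.__version__)   # must be >= 0.9.11
```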
## 💻 Usage Examples

### Basic Usage
```python
import timm
import torch
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform
from timm.layers import SwiGLUPacked
from PIL import Image

# need to specify MLP layer and activation function for proper init
model = timm.create_model("hf-hub:paige-ai/Virchow2", pretrained=True, mlp_layer=SwiGLUPacked, act_layer=torch.nn.SiLU)
model = model.eval()

transforms = create_transform(**resolve_data_config(model.pretrained_cfg, model=model))

image = Image.open("/path/to/your/image.png")
image = transforms(image).unsqueeze(0)  # size: 1 x 3 x 224 x 224

output = model(image)  # size: 1 x 261 x 1280

class_token = output[:, 0]    # size: 1 x 1280
patch_tokens = output[:, 5:]  # size: 1 x 256 x 1280; tokens 1-4 are register tokens, so we ignore them

# concatenate class token and average pool of patch tokens
embedding = torch.cat([class_token, patch_tokens.mean(1)], dim=-1)  # size: 1 x 2560
```
We concatenate the class token and the mean patch token to create the final tile embedding. In more resource-constrained settings, one can experiment with using just the class token or the mean patch token. For downstream tasks with dense outputs (e.g. segmentation), the 256 x 1280 tensor of patch tokens can be used.
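For such dense tasks, the 256 patch tokens can be reshaped into a spatial grid (224 / 14 = 16 patches per side). The sketch below continues from the Basic Usage block above:

```python
# a sketch of turning patch tokens into a spatial feature map for dense
# tasks; continues from the Basic Usage block above
spatial = patch_tokens.reshape(1, 16, 16, 1280).permute(0, 3, 1, 2)
# size: 1 x 1280 x 16 x 16, usable as input to a segmentation decoder head
```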
### Advanced Usage
We highly recommend running the model on a GPU in mixed precision (`fp16`) using `torch.autocast`:
```python
model = model.to("cuda")
image = image.to("cuda")

with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
    output = model(image)

class_token = output[:, 0]
patch_tokens = output[:, 5:]
embedding = torch.cat([class_token, patch_tokens.mean(1)], dim=-1)

# the model output will be fp32 because the final operation is a LayerNorm that is run in mixed precision
# optionally, you can convert the embedding to fp16 for efficiency in downstream use
embedding = embedding.to(torch.float16)
```
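For higher throughput over many tiles, the same pattern extends to batches. The sketch below reuses `model` and `transforms` from the Basic Usage section, with hypothetical tile paths:

```python
# a batched-inference sketch; `tile_paths` is a hypothetical list of tile images,
# and `model`/`transforms` come from the Basic Usage section above
import torch
from PIL import Image

tile_paths = ["tile_0.png", "tile_1.png"]  # placeholder paths
batch = torch.stack([transforms(Image.open(p).convert("RGB")) for p in tile_paths]).to("cuda")

with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
    output = model(batch)  # size: B x 261 x 1280

embeddings = torch.cat([output[:, 0], output[:, 5:].mean(1)], dim=-1)  # size: B x 2560
```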
## 📚 Documentation

### Model Details
- Developed by: Paige, NYC, USA, and Microsoft Research, Cambridge, MA, USA

| Property | Details |
|----------|---------|
| Model Type | Image feature backbone |
| Model Stats | Params (M): 632; Image size: 224 x 224 |
| Model Architecture | Architecture: ViT-H/14; Patch size: 14; Layers: 32; Embedding dimension: 1280; Activation function: SwiGLU; Attention heads: 16; LayerScale: true; Register tokens: 4 |
| Training Details | Precision: Mixed precision (`fp16`); Objective: Modified DINOv2 (https://doi.org/10.48550/arXiv.2304.07193), with the KoLeo regularizer replaced with a kernel density estimator and the crop-and-resize augmentation replaced with extended context translation |
| Paper | Virchow2: Scaling Self-Supervised Mixed Magnification Models in Pathology (https://arxiv.org/pdf/2408.00738) |
| Pretraining Dataset | Internal dataset of 3.1 million whole slide images from Memorial Sloan Kettering Cancer Center; tiles sampled at 2.0, 1.0, 0.5, and 0.25 microns per pixel (5x, 10x, 20x, and 40x magnification) |
| License | CC-BY-NC-ND-4.0 |
### Use

#### Direct use

Virchow2 is intended to be used as a frozen feature extractor as the foundation for tile-level and whole slide-level classifiers (see the aggregation sketch below).

#### Downstream use

Virchow2 can be finetuned to adapt to specific tasks and/or datasets.
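The model card does not prescribe a slide-level aggregator; as one simple, hypothetical choice, the sketch below mean-pools frozen tile embeddings into a slide embedding and attaches a linear classification head:

```python
# a minimal slide-level sketch: mean-pool frozen tile embeddings and attach a
# linear head; the tensor shapes and the 2-class head are illustrative only
import torch

tile_embeddings = torch.randn(500, 2560)   # placeholder: 500 tiles from one slide
slide_embedding = tile_embeddings.mean(0)  # size: 2560
head = torch.nn.Linear(2560, 2)            # hypothetical binary classification head
logits = head(slide_embedding)             # size: 2
```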
## 📄 License

This model and associated code are released under the CC-BY-NC-ND 4.0 license and may only be used for non-commercial, academic research purposes with proper attribution. Any commercial use, sale, or other monetization of the Virchow2 Model and its derivatives, which include models trained on outputs from the Virchow2 Model or datasets created from the Virchow2 Model, is prohibited and requires prior approval.
## ⚠️ Important Note

By downloading the Virchow2 Model, you attest that all information (affiliation, research use) is correct and up-to-date. Downloading the Virchow2 Model requires prior registration on Hugging Face and agreeing to the terms of use. You also agree not to distribute, publish or reproduce a copy of the Virchow2 Model. If another user within your organization wishes to use the Virchow2 Model, they must register as an individual user and agree to comply with the terms of use. If you are a commercial entity, please contact the corresponding author.

Further, by downloading the Virchow2 model, you agree you will only use the Virchow2 model for academic research purposes and will not use, or allow others to use, the Virchow2 model to:
1. Diagnose, cure, mitigate, treat, or prevent disease or any other conditions, including for Investigational Use Only (“IUO”), Research Use Only (“RUO”), commercial, clinical or other similar use, and including as a substitute for professional medical advice, a healthcare opinion, a diagnosis, treatment, or the clinical judgment of a healthcare professional, as no license or right is granted for any such purposes.
2. Re-identify the deidentified data used to develop the Virchow2 Model.
3. Violate the law or others’ rights, including to:
   a. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content.
   b. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals.
   c. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services.
   d. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices.
   e. Collect, process, disclose, generate, or infer the identity of individuals or the health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws.
   f. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Virchow2 Model or any related materials.
   g. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system.
4. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including the use of the Virchow2 Model as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, including for Investigational Use Only (“IUO”), Research Use Only (“RUO”), commercial, clinical or similar use.
5. Intentionally deceive or mislead others, including representing that the use of the Virchow2 Model or its outputs is human-generated.
Further, you agree that you will appropriately disclose to end users any known dangers of your AI system.
## 📚 Citation

Please cite the following work if you use this model in your research:

Virchow2: Scaling Self-Supervised Mixed Magnification Models in Pathology. https://arxiv.org/pdf/2408.00738