
CONCH

Developed by MahmoodLab
CONCH is a vision-language foundation model for histopathology, pre-trained on 1.17 million pathology image-text pairs. It demonstrates state-of-the-art performance on 14 computational pathology tasks.
Downloads: 12.76k
Release date: 1/5/2024

Model Overview

CONCH is a vision-language foundation model designed specifically for histopathology. It processes both pathology images and text and supports a wide range of tasks, including image classification, text-to-image retrieval, image-to-text retrieval, caption generation, and tissue segmentation.

Model Features

Multimodal Capability
Capable of processing both pathology images and text, supporting various cross-modal tasks.
Broad Applicability
Applicable not only to H&E-stained images but also capable of producing more expressive feature representations for non-H&E-stained images (e.g., immunohistochemistry and special stains).
Low Contamination Risk
Pre-training did not use publicly available pathology datasets such as TCGA, PAIP, or GTEx, minimizing the risk of data contamination for public benchmarks or private pathology datasets.

Model Capabilities

Image feature extraction (see the sketch after this list)
Text feature extraction
Image classification
Text-to-image retrieval
Image-to-text retrieval
Caption generation
Tissue segmentation
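
The image and text feature-extraction capabilities above can be exercised with the code released alongside the model. The sketch below follows the usage documented in the MahmoodLab/CONCH GitHub repository; the function names (create_model_from_pretrained, get_tokenizer, tokenize), the model tag, and the need for a Hugging Face access token are assumptions based on that repository and should be verified against the released code.

```python
# Minimal sketch of CONCH feature extraction, following the usage documented in the
# MahmoodLab/CONCH repository (function names, model tag, and token argument are
# assumptions based on that repo; the weights on the Hugging Face Hub are gated).
import torch
from PIL import Image
from conch.open_clip_custom import create_model_from_pretrained, get_tokenizer, tokenize

# Load the pretrained vision-language model and its image preprocessing transform.
model, preprocess = create_model_from_pretrained(
    "conch_ViT-B-16", "hf_hub:MahmoodLab/conch", hf_auth_token="<your_hf_token>"
)
model.eval()

# Image features: preprocess a pathology ROI and encode it.
image = preprocess(Image.open("roi.png").convert("RGB")).unsqueeze(0)  # (1, 3, H, W)
with torch.inference_mode():
    image_embs = model.encode_image(image)                             # (1, embed_dim)

# Text features: tokenize a pathology caption and encode it.
tokenizer = get_tokenizer()
tokens = tokenize(texts=["an H&E image of invasive ductal carcinoma"], tokenizer=tokenizer)
with torch.inference_mode():
    text_embs = model.encode_text(tokens)                              # (1, embed_dim)

print(image_embs.shape, text_embs.shape)
```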

Use Cases

Computational Pathology
Zero-shot ROI Classification
Classify regions of interest in pathology images without additional training.
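A minimal sketch of how zero-shot ROI classification works with a CLIP-style model such as CONCH: each class name is wrapped in a text prompt, both modalities are embedded, and the ROI is assigned to the class whose prompt embedding is most similar. The class names, prompt template, and embedding size below are illustrative, and random tensors stand in for the CONCH embeddings so the example runs on its own.

```python
# Sketch of zero-shot ROI classification from precomputed embeddings.
# Random tensors stand in for CONCH image/text embeddings.
import torch
import torch.nn.functional as F

classes = ["invasive ductal carcinoma", "invasive lobular carcinoma", "normal breast tissue"]
prompts = [f"an H&E image of {c}" for c in classes]  # illustrative prompt template

embed_dim = 512                                   # assumed embedding size
image_emb = torch.randn(1, embed_dim)             # stand-in for model.encode_image(roi)
text_embs = torch.randn(len(prompts), embed_dim)  # stand-in for model.encode_text(tokens)

# Cosine similarity between the ROI and each class prompt
# (a learned logit scale is usually applied before the softmax; omitted here).
image_emb = F.normalize(image_emb, dim=-1)
text_embs = F.normalize(text_embs, dim=-1)
logits = image_emb @ text_embs.T                  # (1, num_classes)

probs = logits.softmax(dim=-1)
pred = classes[probs.argmax(dim=-1).item()]
print(pred, probs.tolist())
```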
Zero-shot ROI Image-Text Bidirectional Retrieval
Supports bidirectional retrieval between pathology images and text.
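Retrieval in both directions reduces to ranking by similarity in the shared embedding space. The sketch below builds the similarity matrix once and reads it row-wise for text-to-image retrieval and column-wise for image-to-text retrieval; random tensors again stand in for CONCH embeddings.

```python
# Sketch of bidirectional image-text retrieval from precomputed embeddings.
import torch
import torch.nn.functional as F

embed_dim = 512
image_embs = F.normalize(torch.randn(1000, embed_dim), dim=-1)  # gallery of ROI embeddings
text_embs = F.normalize(torch.randn(50, embed_dim), dim=-1)     # caption embeddings

sim = text_embs @ image_embs.T   # (num_texts, num_images) cosine-similarity matrix

# Text-to-image retrieval: top-5 images for each caption.
t2i_scores, t2i_idx = sim.topk(5, dim=1)

# Image-to-text retrieval: top-5 captions for each image.
i2t_scores, i2t_idx = sim.T.topk(5, dim=1)

print(t2i_idx[0].tolist())  # indices of the best-matching images for caption 0
print(i2t_idx[0].tolist())  # indices of the best-matching captions for image 0
```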
MI-Zero-based Zero-shot WSI Classification
Supports zero-shot classification of whole slide images.
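MI-Zero treats a whole slide image as a bag of tiles: every tile is scored against the class prompts in the shared embedding space, and the tile-level scores are pooled into a slide-level prediction. The sketch below uses top-K mean pooling as the aggregation, which is an assumption about the exact pooling rule; random tensors stand in for CONCH tile and prompt embeddings.

```python
# Sketch of MI-Zero-style zero-shot WSI classification: pool tile-level
# zero-shot similarity scores into a slide-level prediction.
# Top-K mean pooling is used as the aggregation here (an assumption).
import torch
import torch.nn.functional as F

embed_dim, num_tiles, num_classes, k = 512, 4096, 3, 50

tile_embs = F.normalize(torch.randn(num_tiles, embed_dim), dim=-1)     # stand-in tile embeddings
class_embs = F.normalize(torch.randn(num_classes, embed_dim), dim=-1)  # stand-in prompt embeddings

tile_scores = tile_embs @ class_embs.T       # (num_tiles, num_classes)

# Aggregate: average the K highest-scoring tiles per class.
topk_scores, _ = tile_scores.topk(k, dim=0)  # (k, num_classes)
slide_scores = topk_scores.mean(dim=0)       # (num_classes,)

pred_class = slide_scores.argmax().item()
print(pred_class, slide_scores.tolist())
```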