WSInfer-MIL is a command line tool to run pre-trained MIL models on whole slide images. It is the slide-level companion to WSInfer, which provides patch-level classification.
Caution
WSInfer-MIL is intended only for research purposes.
WSInfer-MIL can be installed using pip. WSInfer-MIL will install PyTorch automatically if it is not installed, but this may not install GPU-enabled PyTorch even if a GPU is available. For this reason, install PyTorch before installing WSInfer-MIL.
Please see PyTorch's installation instructions for help installing PyTorch. The instructions differ based on your operating system and your choice of pip or conda. Thankfully, the instructions provided by PyTorch also install the appropriate version of CUDA. We refrain from including installation commands here because they can change over time; please refer to PyTorch's installation instructions for the most up-to-date commands.
You will need a new-enough driver for your NVIDIA GPU. Please see this version compatibility table for the minimum versions required for different CUDA versions.
To test whether PyTorch can detect your GPU, check that this code snippet prints True:
python -c 'import torch; print(torch.cuda.is_available())'
Once PyTorch is installed, install WSInfer-MIL from PyPI:
pip install wsinfer-mil
Caution
These models are intended only for research purposes.
I (Jakub Kaczmarzyk) have uploaded several pre-trained MIL models to HuggingFace for the community to explore. Over time, I hope that others will contribute MIL models too. If you are interested in this, please feel free to email me at jakub.kaczmarzyk at stonybrookmedicine dot edu.
The models are available at https://huggingface.co/kaczmarj. For example:
wsinfer-mil run -m kaczmarj/pancancer-tp53-mut.tcga -i slide.svs
wsinfer-mil run -m kaczmarj/pancancer-tissue-classifier.tcga -i slide.svs
wsinfer-mil run -m kaczmarj/breast-lymph-nodes-metastasis.camelyon16 -i slide.svs
wsinfer-mil run -m kaczmarj/gbmlgg-survival-porpoise.tcga -i slide.svs
wsinfer-mil run -m kaczmarj/kirp-survival-porpoise.tcga -i slide.svs
You can use WSInfer-MIL with a local MIL model. The model must be saved to TorchScript format, and a model configuration file must also be written.
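If you have a trained model as a PyTorch nn.Module, one way to save it to TorchScript is with torch.jit.trace. The snippet below is a minimal sketch: the placeholder model, the patch count, and the 768-dimensional embeddings (the size produced by the ctranspath feature extractor) stand in for your own trained module and feature dimensions.

import torch
import torch.nn as nn

# Placeholder standing in for your trained attention-MIL module.
model = nn.Sequential(nn.Linear(768, 2))
model.eval()

# Trace with example input shaped like a bag of patch features:
# here, 1000 patches with 768-dimensional embeddings.
example_features = torch.rand(1000, 768)
traced = torch.jit.trace(model, example_features)
traced.save("model.pt")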
Here is an example of a configuration JSON file:
{
  "spec_version": "1.0",
  "type": "abmil",
  "patch_size_um": 128,
  "feature_extractor": "ctranspath",
  "num_classes": 2,
  "class_names": [
    "wildtype",
    "mutant"
  ]
}
There is a JSON schema in wsinfer_mil/schemas/model-config.schema.json for reference.
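If you would like to check a configuration file against that schema before running, the sketch below is one way to do it. It assumes a checkout of the repository (for the schema path) and the third-party jsonschema package, which WSInfer-MIL itself may not require.

import json

import jsonschema

# Paths assume a repository checkout and a config named model.config.json.
with open("wsinfer_mil/schemas/model-config.schema.json") as f:
    schema = json.load(f)
with open("model.config.json") as f:
    config = json.load(f)

# Raises jsonschema.ValidationError if the config does not match the schema.
jsonschema.validate(instance=config, schema=schema)
print("Configuration is valid.")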
Once you have the model in TorchScript format and the configuration JSON file, you can run the model on slides. For example:
wsinfer-mil runlocal -m model.pt -c model.config.json \
-i slides/TCGA-3L-AA1B-01Z-00-DX1.8923A151-A690-40B7-9E5A-FCBEDFC2394F.svs
The pipeline for attention-based MIL methods is rather standardized. Here are the steps that WSInfer-MIL takes. In the future, we would like to incorporate inference using graph-based methods, so this workflow will likely have to be modified.
- Segment the tissue in the image.
- Create patches of the tissue regions.
- Run a feature extractor on these patches.
- Run the pre-trained model on the extracted features.
- Save the model outputs.
WSInfer-MIL caches steps 1, 2, and 3, as those can be reused among MIL models. Step 3 (feature extraction) is often the bottleneck of the workflow, and reusing extracted features can reduce runtime considerably.
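For intuition about step 4, here is a schematic sketch of attention-based MIL pooling in the spirit of Ilse et al. (2018). This is illustrative only, not WSInfer-MIL's actual implementation, and the layer sizes are arbitrary: the model scores each patch embedding, softmaxes the scores into attention weights, pools the embeddings into a single slide-level vector, and classifies that vector.

import torch
import torch.nn as nn

class AttentionMILSketch(nn.Module):
    def __init__(self, in_features=768, hidden=128, num_classes=2):
        super().__init__()
        # Small MLP that assigns a score to each patch embedding.
        self.attention = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        self.classifier = nn.Linear(in_features, num_classes)

    def forward(self, features):
        # features: (num_patches, in_features), one embedding per patch.
        scores = self.attention(features)                  # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)             # attention over patches
        slide_embedding = (weights * features).sum(dim=0)  # (in_features,)
        return self.classifier(slide_embedding)            # slide-level logits

logits = AttentionMILSketch()(torch.rand(1000, 768))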
Clone and install wsinfer-mil:
Clone the repository and make a virtual environment for it. Then install the dependencies, with the dev extras.
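One way to do the clone and environment setup is sketched below. The repository URL is an assumption; check the project page for the canonical location.

# URL is an assumption; replace with the canonical repository URL.
git clone https://github.com/SBU-BMI/wsinfer-mil.git
cd wsinfer-mil
python -m venv venv
source venv/bin/activate

With the environment active, install the package in editable mode: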
pip install -e .[dev]
Configure pre-commit to run the formatter before commits happen:
pre-commit install