diff --git a/README.md b/README.md
index 11dca26..48e942b 100644
--- a/README.md
+++ b/README.md
@@ -4,6 +4,37 @@
 [![Docker Version](https://img.shields.io/docker/v/waticlems/slide2vec?sort=semver&label=docker&logo=docker&color=2496ED)](https://hub.docker.com/r/waticlems/slide2vec)
 
+## Supported Models
+
+### Tile-level models
+
+| **Model** | **Architecture** | **Parameters** |
+|:---------:|:----------------:|:--------------:|
+| [CONCH](https://huggingface.co/MahmoodLab/conch) | ViT-B/16 | 86M |
+| [H0-mini](https://huggingface.co/bioptimus/H0-mini) | ViT-B/16 | 86M |
+| [Hibou-B](https://huggingface.co/histai/hibou-b) | ViT-B/16 | 86M |
+| [Hibou-L](https://huggingface.co/histai/hibou-L) | ViT-L/16 | 307M |
+| [MUSK](https://huggingface.co/xiangjx/musk) | ViT-L/16 | 307M |
+| [Phikon-v2](https://huggingface.co/owkin/phikon-v2) | ViT-L/16 | 307M |
+| [UNI](https://huggingface.co/MahmoodLab/UNI) | ViT-L/16 | 307M |
+| [Virchow](https://huggingface.co/paige-ai/Virchow) | ViT-H/14 | 632M |
+| [Virchow2](https://huggingface.co/paige-ai/Virchow2) | ViT-H/14 | 632M |
+| [MidNight12k](https://huggingface.co/kaiko-ai/midnight) | ViT-G/14 | 1.1B |
+| [UNI2](https://huggingface.co/MahmoodLab/UNI2-h) | ViT-G/14 | 1.1B |
+| [Prov-GigaPath](https://huggingface.co/prov-gigapath/prov-gigapath) | ViT-G/14 | 1.1B |
+| [H-optimus-0](https://huggingface.co/bioptimus/H-optimus-0) | ViT-G/14 | 1.1B |
+| [H-optimus-1](https://huggingface.co/bioptimus/H-optimus-1) | ViT-G/14 | 1.1B |
+| [Kaiko](https://github.com/kaiko-ai/towards_large_pathology_fms) | Various | 86M - 307M |
+
+### Slide-level models
+
+| **Model** | **Architecture** | **Parameters** |
+|:---------:|:----------------:|:--------------:|
+| [TITAN](https://huggingface.co/MahmoodLab/TITAN) | Transformer | 49M |
+| [Prov-GigaPath](https://huggingface.co/prov-gigapath/prov-gigapath) | Transformer (LongNet) | 87M |
+| [PRISM](https://huggingface.co/paige-ai/PRISM) | Perceiver Resampler | 99M |
+
+
 ## 🛠️ Installation
 
 System requirements: Linux-based OS (e.g., Ubuntu 22.04) with Python 3.10+ and Docker installed.
 
@@ -23,7 +54,7 @@ Replace `/path/to/your/data` with your local data directory.
 
 Alternatively, you can install `slide2vec` via pip:
 ```shell
-pip install slide2vec
+pip install slide2vec
 ```
 
 ## 🚀 Extract features
@@ -43,9 +74,7 @@ pip install slide2vec
 
 - for preprocessing options: `slide2vec/configs/default_tiling.yaml`
 - for model options: `slide2vec/configs/default_model_.yaml`
 
-   We've also added default configuration files for each of the foundation models currently supported:
-   - tile-level: `uni`, `uni2`, `virchow`, `virchow2`, `prov-gigapath`, `h-optimus-0`, `h-optimus-1`, `h0-mini`, `conch`, `musk`, `phikonv2`, `hibou-b`, `hibou-L`, `MidNight12k`, [`kaiko`](https://github.com/kaiko-ai/towards_large_pathology_fms)
-   - slide-level: `prov-gigapath`, `titan`, `prism`
+   We've also added default configuration files for each of the foundation models currently supported (see above).
 
 3. Kick off distributed feature extraction