The official implementation of DisCoPatch in PyTorch.
You will need:
- Python (see pyproject.toml for the full version)
- Git
- Make
- a .secrets file with the required secrets and credentials
- load environment variables from .env
- CUDA >= 12.4
Clone this repository (requires git ssh keys)
git clone --recursive git@github.com:caetas/DisCoPatch.git
cd discopatch
Install dependencies
conda env create -f environment.yml
conda activate python3.11
or if environment already exists
conda activate python3.11
Set up the virtualenv using the Makefile recipe
(python3.11) $ make setup-all
You might be required to run the following command once to set up the automatic activation of the conda environment and the virtualenv:
direnv allow
Feel free to edit the .envrc file if you prefer to activate the environments manually.
Alternatively, you can set up the virtualenv manually by running the following commands:
python -m venv .venv-dev
.venv-dev/Scripts/Activate.ps1
python -m pip install --upgrade pip setuptools
python -m pip install -r requirements/requirements.txt
To run the code please remember to always activate both environments:
conda activate python3.11
.venv/Scripts/Activate.ps1
The evaluation of these models closely follows OpenOOD's benchmark. Three types of OOD are defined: Near-OOD, which exhibits semantic shift relative to the ID datasets; Far-OOD, which encompasses both semantic and domain shifts; and Covariate Shift OOD, which involves corruptions of the ID set. There are also four well-defined ID datasets:
- ImageNet-1K
- Near-OOD: SSB-hard, NINCO
- Far-OOD: iNaturalist, DTD, OpenImage-O
- Covariate Shift OOD: ImageNet(-C)
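For reference, the splits above can be collected into a small mapping. This is an illustrative sketch only; the key and variable names below are our own, not identifiers from the codebase:

```python
# Benchmark splits for the ImageNet-1K ID setting, as listed above.
# Names are illustrative, not identifiers from this repository.
OOD_SPLITS = {
    "near_ood": ["SSB-hard", "NINCO"],
    "far_ood": ["iNaturalist", "DTD", "OpenImage-O"],
    "covariate_shift": ["ImageNet-C"],
}

# Flat list of every OOD dataset the benchmark evaluates against.
ALL_OOD_DATASETS = [d for split in OOD_SPLITS.values() for d in split]
```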
ImageNet-1k is automatically downloaded from HuggingFace when you use the DataLoader.
The remaining datasets can be downloaded using datasets_download.py by running the following commands:
cd src/discopatch
python datasets_download.py [--imagenet]
Note: Use the --imagenet flag if you want to download ImageNet-C.
- DisCoPatch (code, train script, eval script, documentation)
The commands required to train and evaluate each of the models are provided in the documentation section: DisCoPatch.md
You can download the pre-trained DisCoPatch checkpoint using this link.
| OOD Shift | Dataset | AUROC | FPR@95 |
|---|---|---|---|
| Near-OOD | SSB-hard | 95.8% | 19.8% |
| Near-OOD | NINCO | 94.3% | 39.0% |
| Far-OOD | iNaturalist | 99.1% | 3.6% |
| Far-OOD | DTD | 96.4% | 18.9% |
| Far-OOD | OpenImage-O | 94.4% | 29.7% |
| Covariate Shift | ImageNet-1K(-C) | 97.2% | 10.6% |
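The AUROC and FPR@95 values above are standard OOD-detection metrics computed from per-sample scores. A minimal NumPy sketch, assuming higher scores indicate in-distribution samples (this mirrors the usual OpenOOD-style evaluation, not this repo's exact code):

```python
import numpy as np

def ood_metrics(id_scores, ood_scores):
    """AUROC and FPR@95 for an OOD detector (higher score = more in-distribution)."""
    id_scores = np.asarray(id_scores, dtype=float)
    ood_scores = np.asarray(ood_scores, dtype=float)
    n_id, n_ood = len(id_scores), len(ood_scores)

    # AUROC via the Mann-Whitney U statistic: rank all scores together and
    # compare the ID rank-sum against its minimum possible value.
    # (Ties are not rank-averaged; fine for continuous scores.)
    scores = np.concatenate([id_scores, ood_scores])
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    auroc = (ranks[:n_id].sum() - n_id * (n_id + 1) / 2) / (n_id * n_ood)

    # FPR@95: fraction of OOD samples scoring above the threshold that
    # retains 95% of the ID samples (the 5th percentile of ID scores).
    threshold = np.percentile(id_scores, 5)
    fpr_at_95 = float((ood_scores >= threshold).mean())
    return float(auroc), fpr_at_95
```

With perfectly separated scores the function returns (1.0, 0.0); overlapping score distributions push AUROC toward 0.5 and FPR@95 toward 1.0.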
The code examples are set up to use Weights & Biases to track your training runs. Please refer to the full documentation if needed, or follow these steps:
- Create an account in Weights & Biases.
- If you have installed the requirements, you can skip this step. If not, activate the conda environment and the virtualenv and run:
  pip install wandb
- Run the following command and insert your API key when prompted:
  wandb login
See the Developer guidelines for more information.
Contributions of any kind are welcome. Please read CONTRIBUTING.md for details and the process for submitting pull requests to us.
This project is licensed under the terms of the MIT license.
See LICENSE for more details.
All the repositories used to generate this code are mentioned in each of the corresponding files. We would like to list them in no particular order:
If you publish work that uses DisCoPatch, please cite DisCoPatch.
BibTeX information will be added later.
