Official repository for AdverX-Ray, powered by the GenerativeZoo.
This work is part of the Xecs TASTI project, nr. 2022005.
You will need:

- Python (see `pyproject.toml` for the full version)
- Git
- Make
- a `.secrets` file with the required secrets and credentials
- a `.env` file to load environment variables from
- CUDA >= 12.1
- a Weights & Biases account
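As a rough illustration of how a dotenv-style `.env` file is typically consumed, here is a minimal loader sketch in Python. The parsing rules and any variable names are assumptions for illustration; the repository's own `.envrc`/Make setup remains authoritative.

```python
import os
from pathlib import Path


def load_env_file(path: str) -> dict:
    """Parse simple KEY=VALUE lines from a dotenv-style file,
    skipping blanks and comments, and return them as a dict."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env


# Merge .env values into the process environment if the file exists,
# without overriding variables that are already set.
if Path(".env").exists():
    for k, v in load_env_file(".env").items():
        os.environ.setdefault(k, v)
```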
Clone this repository (requires git ssh keys)
git clone --recursive git@github.com:caetas/AdverX.git
cd adverx
Install conda environment
conda env create -f environment.yml
conda activate python3.10
Then set up the virtualenv using the Makefile recipe:
(python3.10) $ make setup-all
You might be required to run the following command once to set up automatic activation of the conda environment and the virtualenv:
direnv allow
Feel free to edit the .envrc file if you prefer to activate the environments manually.
You can set up the virtualenv by running the following commands:
python -m venv .venv-dev
.venv-dev/Scripts/Activate.ps1
python -m pip install --upgrade pip setuptools
python -m pip install -r requirements/requirements.txt
To run the code, please remember to always activate both environments:
conda activate python3.10
.venv-dev/Scripts/Activate.ps1
The code examples are set up to use Weights & Biases to track your training runs. Please refer to the full documentation if needed, or follow these steps:
1. Create an account in Weights & Biases.
2. If you have installed the requirements, you can skip this step. If not, activate the conda environment and the virtualenv and run: `pip install wandb`
3. Run the following command and insert your API key when prompted: `wandb login`
AdverX utilizes the first iteration of the BIMCV-COVID-19+ dataset, which can be downloaded here. However, the dataset needs to be reorganized into a format compatible with the analysis performed in this work. Please follow these steps:
1. Download the first iteration of the dataset (~70 GB), available here.
2. Unzip the folder and move it to `data/raw`.
3. Untar all the `tar.gz` files in the folder.
4. Run the script available in `src/adverx`: `python preprocess_dataset.py`
5. You can delete the original dataset folder in `data/raw`.
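The untar step above can be sketched in Python as follows. The directory layout and the per-archive output folders here are assumptions; `preprocess_dataset.py` in `src/adverx` remains the authoritative preprocessing script.

```python
import tarfile
from pathlib import Path


def untar_all(root: str) -> list:
    """Extract every .tar.gz under `root` into a sibling folder named
    after the archive, returning the names of the extracted archives."""
    extracted = []
    for archive in sorted(Path(root).rglob("*.tar.gz")):
        # "batch1.tar.gz" -> output folder "batch1"
        target = archive.with_suffix("").with_suffix("")
        target.mkdir(parents=True, exist_ok=True)
        with tarfile.open(archive, "r:gz") as tf:
            tf.extractall(target)
        extracted.append(archive.name)
    return extracted
```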
The checkpoints can be downloaded here and should be unzipped and moved to models.
Please run the following commands to train the models on data from a specific machine.
python train.py --hidden_dims 64 128 256 512 512 --latent_dim 1024 --n_epochs 200 --lr 3e-4 --patches_image 32 --in_machine siemens
python train.py --hidden_dims 64 128 256 512 512 --latent_dim 1024 --n_epochs 200 --lr 3e-4 --patches_image 36 --in_machine konica
python train.py --hidden_dims 64 128 256 512 512 --latent_dim 1024 --n_epochs 200 --lr 3e-4 --patches_image 40 --in_machine philips
python train.py --hidden_dims 64 128 256 512 512 --latent_dim 1024 --n_epochs 200 --lr 3e-4 --patches_image 42 --in_machine gmm
python train.py --hidden_dims 64 128 256 512 512 --latent_dim 1024 --n_epochs 200 --lr 3e-4 --patches_image 42 --in_machine ge
python train_GLOW.py --L 3 --K 32 --hidden_channels 512 --n_bits 16 --lr 1.5e-4 --n_epochs 100 --learn_top --patches_image 32 --in_machine siemens
python train_GLOW.py --L 3 --K 32 --hidden_channels 512 --n_bits 16 --lr 1.5e-4 --n_epochs 100 --learn_top --patches_image 36 --in_machine konica
python train_GLOW.py --L 3 --K 32 --hidden_channels 512 --n_bits 16 --lr 1.5e-4 --n_epochs 100 --learn_top --patches_image 40 --in_machine philips
python train_GLOW.py --L 3 --K 32 --hidden_channels 512 --n_bits 16 --lr 1.5e-4 --n_epochs 100 --learn_top --patches_image 42 --in_machine gmm
python train_GLOW.py --L 3 --K 32 --hidden_channels 512 --n_bits 16 --lr 1.5e-4 --n_epochs 100 --learn_top --patches_image 42 --in_machine ge
python train_VAE.py --hidden_dims 64 128 256 512 512 --latent_dim 1024 --n_epochs 200 --lr 3e-4 --patches_image 32 --in_machine siemens
python train_VAE.py --hidden_dims 64 128 256 512 512 --latent_dim 1024 --n_epochs 200 --lr 3e-4 --patches_image 36 --in_machine konica
python train_VAE.py --hidden_dims 64 128 256 512 512 --latent_dim 1024 --n_epochs 200 --lr 3e-4 --patches_image 40 --in_machine philips
python train_VAE.py --hidden_dims 64 128 256 512 512 --latent_dim 1024 --n_epochs 200 --lr 3e-4 --patches_image 42 --in_machine gmm
python train_VAE.py --hidden_dims 64 128 256 512 512 --latent_dim 1024 --n_epochs 200 --lr 3e-4 --patches_image 42 --in_machine ge
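The training commands above differ only in `--patches_image` and `--in_machine`. As a convenience, the mapping and a command builder can be sketched as below; the mapping is taken directly from the commands above, while the helper itself is illustrative and not part of the repository.

```python
# Patches-per-image used for each acquisition machine in the
# training commands above.
PATCHES_PER_MACHINE = {
    "siemens": 32,
    "konica": 36,
    "philips": 40,
    "gmm": 42,
    "ge": 42,
}


def train_command(machine: str) -> str:
    """Build the AdverX-Ray training invocation for one machine."""
    patches = PATCHES_PER_MACHINE[machine]
    return (
        "python train.py --hidden_dims 64 128 256 512 512 "
        "--latent_dim 1024 --n_epochs 200 --lr 3e-4 "
        f"--patches_image {patches} --in_machine {machine}"
    )
```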
We present an example for the Philips machine only, but the process is similar for other machines.
python eval.py --hidden_dims 64 128 256 512 512 --latent_dim 1024 --in_machine philips --discriminator_checkpoint ./../../models/AdverX/Discriminator_philips.pt
python eval_GLOW.py --L 3 --K 32 --hidden_channels 512 --n_bits 16 --in_machine philips --checkpoint ./../../models/Glow/Glow_32_3_512.pt
python eval_VAE.py --hidden_dims 64 128 256 512 512 --latent_dim 1024 --in_machine philips --checkpoint ./../../models/VAE/VAE_philips.pt
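Evaluation of this kind typically reduces to scoring in-machine and out-of-machine samples and measuring how well the scores separate the two groups, e.g. with AUROC. As an illustration only (this is a generic rank-sum implementation, not the repository's evaluation code):

```python
def auroc(in_scores, out_scores):
    """Area under the ROC curve for separating out-of-distribution
    scores (treated as positives) from in-distribution ones, computed
    as the normalized Mann-Whitney U statistic. Ties count as 0.5."""
    pairs = 0.0
    for o in out_scores:
        for i in in_scores:
            if o > i:
                pairs += 1.0
            elif o == i:
                pairs += 0.5
    return pairs / (len(in_scores) * len(out_scores))
```

A score of 1.0 means perfect separation; 0.5 is chance level.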
Full documentation is available here: docs/.
Thank you for improving the security of the project, please see the Security Policy for more information.
This project is licensed under the terms of the CC-BY-4.0 license.
See LICENSE for more details.
This field will be added soon.
