Prediction of the outcome of mechanical thrombectomy in endovascular stroke patients.
See below for installation instructions.
- Ubuntu 22.04 or above.
- pip: https://pypi.org/project/pip/.
- Python 3.10. See https://www.linuxcapable.com/how-to-install-python-3-10-on-ubuntu-linux/ for instructions on how to install Python 3.10 on Ubuntu 22.04.
- virtualenv 20 or greater.
  - To install: pip install --upgrade virtualenv
  - To check your version run: virtualenv --version
- PyTorch 1.13.1.
- CUDA 11.7.
Install deep-mt in a Python virtual environment to prevent conflicts with other packages.
Clone the project.
git clone git@github.com:jdddog/deep-mt.git
Enter the deep-mt folder.
cd deep-mt
Create a virtual environment.
virtualenv -p python3.10 venv
Activate your virtual environment.
source venv/bin/activate
Install the deep-mt package and its dependencies.
pip install -e . --extra-index-url https://download.pytorch.org/whl/cu117
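To check that PyTorch and the CUDA 11.7 build installed correctly, you can run a short check inside the virtual environment (a minimal sketch; the exact version string depends on your install):

import torch

# Expect something like 1.13.1+cu117; cuda.is_available() is True only if a
# compatible GPU and driver are present.
print(torch.__version__)
print(torch.cuda.is_available())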
See below for data pre-processing instructions.
Your data folder should end up arranged like this:
- data
  - configs: config files.
  - experiments: files saved during training, e.g. weights.
  - nii
    - STKCentreA: NIfTI files for centre A.
    - STKCentreB: NIfTI files for centre B.
  - thrombectomy-example.csv: sample CSV file with fake data.
Make sure that you copy your CSV data files into the data folder. thrombectomy-example.csv shows how these files should be structured.
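To sanity-check that your own CSV has the expected columns, one option is to compare it against the example file with pandas (a minimal sketch; my-data.csv is a placeholder for your own file):

import pandas as pd

# Load the bundled example and your own CSV file.
example = pd.read_csv("data/thrombectomy-example.csv")
mine = pd.read_csv("data/my-data.csv")  # placeholder path

# Report any columns present in the example but missing from your file.
missing = set(example.columns) - set(mine.columns)
print("Missing columns:", sorted(missing) if missing else "none")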
For pre-processing, install the required dependencies:
- Install R: https://www.digitalocean.com/community/tutorials/how-to-install-r-on-ubuntu-22-04
- Install RStudio:
  - Install Ubuntu dependencies: sudo apt install build-essential libcurl4-gnutls-dev libxml2-dev libssl-dev
  - RStudio download location: https://www.rstudio.com/products/rstudio/download/
- Install FSL: https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FslInstallation
- In RStudio, run the command bin/preprocess/install.R
- Install the deep-skull Python project into a virtual environment: https://github.com/jdddog/deep-skull
Run the following command to convert a batch of DICOM CT scans to NIfTI:
./bin/preprocess/dcm2nii-batch.sh /path/to/dicoms /path/to/nii ax_CT
Run the following command to convert a batch of DICOM CT angiogram scans to NIfTI:
./bin/preprocess/dcm2nii-batch.sh /path/to/dicoms /path/to/nii ax_A
Run the preprocess script from the ./bin/preprocess directory:
cd ./bin/preprocess
./preprocess.sh -t ./templates -n ../data/nii -d ../../deep-skull -x 1.0 -y 1.0 -z 2.0
Calculate the CSF volume and ratio of the scans:
deep-mt calc-csf ./data/thrombectomy-example.csv ./data/nii/ ./data/thrombectomy-example-csf.csv
Then merge the brain_volume, csf_volume and csf_ratio columns into your CSV file.
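One way to do the merge is with pandas (a minimal sketch; it assumes both CSVs share a case-identifier column, here called case_id, which may be named differently in your data):

import pandas as pd

# Clinical CSV and the CSF output written by deep-mt calc-csf.
clinical = pd.read_csv("data/thrombectomy-example.csv")
csf = pd.read_csv("data/thrombectomy-example-csf.csv")

# Join on the shared case identifier ("case_id" is an assumed column name).
merged = clinical.merge(
    csf[["case_id", "brain_volume", "csf_volume", "csf_ratio"]],
    on="case_id",
    how="left",
)
merged.to_csv("data/thrombectomy-example-merged.csv", index=False)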
To train a PyTorch model, run the following command:
deep-mt train-pytorch ./data/configs/imaging/sex/sex-ct.yaml
To evaluate a batch of weights, run the following command. By default, the valid and test sets are used for evaluation:
deep-mt evaluate-pytorch ./data/configs/imaging/sex/sex-ct.yaml
To evaluate with the train, valid and test datasets, use the --subset option:
deep-mt evaluate-pytorch ./data/configs/imaging/sex/sex-ct.yaml --subset train --subset valid --subset test
To start TensorBoard:
tensorboard --logdir runs/
To train a scikit-learn model, run the following command:
deep-mt train-sklearn ./data/configs/clinical/logistic-regression-mrs02-36.yaml
To run feature selection, run the following command. This uses SequentialFeatureSelector with RepeatedStratifiedKFold to select a subset of features, then trains and evaluates two models: one with the selected subset of features and one with all features. The scikit-learn model is saved to the experiment folder and the evaluation results are saved to CSV:
deep-mt feature-selection ./data/configs/clinical/logistic-regression-mrs02-36.yaml
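For reference, the core of this feature-selection approach in scikit-learn looks roughly like the following (a self-contained sketch on synthetic data, not the deep-mt implementation; the estimator, fold counts and number of selected features are assumptions):

from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for the clinical features; the real pipeline reads them from the CSV.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=0)

estimator = LogisticRegression(max_iter=1000)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)

# Forward selection of a feature subset, scored with repeated stratified cross-validation.
selector = SequentialFeatureSelector(estimator, n_features_to_select=5, cv=cv)
selector.fit(X, y)

# Compare the selected subset against all features.
subset_score = cross_val_score(estimator, selector.transform(X), y, cv=cv).mean()
full_score = cross_val_score(estimator, X, y, cv=cv).mean()
print(f"subset: {subset_score:.3f}, all features: {full_score:.3f}")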
To visualise the transformed scans, run the following command. By default, all scans from all subsets are visualised:
deep-mt visualise ./data/configs/imaging/sex/sex-ct.yaml
To visualise the transformed scans for a specific subset and to only visualise a maximum number of cases, specify the --subset and --n-cases options. Multiple subsets can be specified by repeating the --subset option:
deep-mt visualise ./data/configs/imaging/sex/sex-ct.yaml --subset valid --n-cases 5
By default, every third slice is visualised; to visualise them all, set --every-n 1.
The visualise command will output the location of the visualisations.
To visualise the salience of a model, run the following command. By default, all scans from all subsets are visualised:
deep-mt visualise-salience ./data/configs/imaging/sex/sex-ct.yaml ./data/experiments/sex-ct/sex-ct_epoch_1.pth --salience-type gradcam
To visualise the salience for a specific subset and to only visualise a maximum number of cases, specify the --subset and --n-cases options. Multiple subsets can be specified by repeating the --subset option:
deep-mt visualise-salience ./data/configs/imaging/sex/sex-ct.yaml ./data/experiments/sex-ct/sex-ct_epoch_1.pth --salience-type gradcam --subset valid --n-cases 5
By default, every third slice is visualised; to visualise them all, set --every-n 1.
The visualise-salience command will output the location of the visualisations.
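For context, Grad-CAM weights the activation maps of a convolutional layer by the spatial average of their gradients with respect to the predicted class, sums them over channels and applies a ReLU. A generic 2D sketch of the idea, using a torchvision model as a stand-in (this is not the deep-mt implementation, which operates on CT volumes):

import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)  # stand-in input image
scores = model(x)
scores[0, scores.argmax()].backward()

# Weight each activation map by its average gradient, sum over channels, then ReLU.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * activations["value"]).sum(dim=1))
print(cam.shape)  # (1, 7, 7); upsampled to the input size in practice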
A couple of commands come in handy for data cleaning when adding new scans to the dataset.
To merge new DICOM scans with an existing dataset, run the merge-scans command:
deep-mt merge-scans /path/to/src/dicoms /path/to/dst/dicoms
To check that DICOM scans are readable, run the check-scans-readable command and then fix any issues:
deep-mt check-scans-readable /path/to/dicoms
To find duplicate DICOM scans, even if the scans have different case ids or series ids, use the find-duplicate-scans command:
deep-mt find-duplicate-scans /path/to/dicoms
You may need to clean the DICOMs with gdcmanon first:
gdcmanon -r --continue --dumb --remove 0008,0020 --remove 0008,0030 -i /path/to/dicoms -o /path/to/output
Example commands to crop scans:
deep-mt crop /path/to/nii/ "^STK(?:CH)?\d+[_]?\d*_ax_CT_0.44x0.44x1.0mm_to_scct_unsmooth_SS_0_0.44x0.44x1.0mm_DenseRigid.nii.gz$" 39 45 52 372 460 110
deep-mt crop /path/to/nii/ "^STK(?:CH)?\d+[_]?\d*_ax_CT_0.44x0.44x1.0mm_to_scct_unsmooth_SS_0_0.44x0.44x1.0mm_DenseRigid_combined_bet.nii.gz$" 39 45 52 372 460 110
deep-mt crop /path/to/nii/ "^STK(?:CH)?\d+[_]?\d*_ax_A_cropped_to_STK(?:CH)?\d+[_]?\d*_ax_CT_0.44x0.44x1.0mm_to_scct_unsmooth_SS_0_0.44x0.44x1.0mm_DenseRigid.nii.gz$" 39 45 52 372 460 110
Example commands to resample scans:
deep-mt resample /path/to/nii/ "^STK(?:CH)?\d+[_]?\d*_ax_CT_0.44x0.44x1.0mm_to_scct_unsmooth_SS_0_0.44x0.44x1.0mm_DenseRigid_combined_bet_crop.nii.gz$" --x 0.44 --y 0.44 --z 1.5
deep-mt resample /path/to/nii/ "^STK(?:CH)?\d+[_]?\d*_ax_CT_0.44x0.44x1.0mm_to_scct_unsmooth_SS_0_0.44x0.44x1.0mm_DenseRigid_crop.nii.gz$" --x 0.44 --y 0.44 --z 1.5
deep-mt resample /path/to/nii/ "^STK(?:CH)?\d+[_]?\d*_ax_A_cropped_to_STK(?:CH)?\d+[_]?\d*_ax_CT_0.44x0.44x1.0mm_to_scct_unsmooth_SS_0_0.44x0.44x1.0mm_DenseRigid_crop.nii.gz$" --x 0.44 --y 0.44 --z 1.5
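The quoted pattern argument passed to crop and resample is a regular expression that selects which NIfTI files to process. A quick way to check what a given pattern matches is Python's re module (a minimal sketch; the file name below is made up to follow the naming convention above):

import re

pattern = r"^STK(?:CH)?\d+[_]?\d*_ax_CT_0.44x0.44x1.0mm_to_scct_unsmooth_SS_0_0.44x0.44x1.0mm_DenseRigid_crop.nii.gz$"
name = "STK0001_ax_CT_0.44x0.44x1.0mm_to_scct_unsmooth_SS_0_0.44x0.44x1.0mm_DenseRigid_crop.nii.gz"

# re.match anchors at the start; the pattern itself ends with $, so the whole name must match.
print(bool(re.match(pattern, name)))  # True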
To make predictions for a given weights file:
deep-mt predict-pytorch ./data/configs/imaging/mrs/mrs-ct-1.0x1.0x2.0mm-152x182x76px.yaml ./data/experiments/mrs-ct-1.0x1.0x2.0mm-152x182x76px/mrs-ct-1.0x1.0x2.0mm-152x182x76px_epoch_33.pth
deep-mt predict-pytorch ./data/configs/imaging/mrs/mrs-ct-1.0x1.0x2.0mm-152x182x76px-no-basilars.yaml ./data/experiments/mrs-ct-1.0x1.0x2.0mm-152x182x76px-no-basilars/mrs-ct-1.0x1.0x2.0mm-152x182x76px-no-basilars_epoch_59.pth
The Python code is licensed under Apache License Version 2.0.
The bash and R based pre-processing scripts in ./bin/preprocess are licensed under the GPLv3 due to dependencies used in these scripts.