This is a Python package for deep learning (DL) enhancements to synthetic Magnetic Resonance Imaging (MRI) via a Deep Image Prior, which is especially effective in noisier settings. The repository provides the code for the implementation in the paper "Personalized synthetic MR imaging with deep learning enhancements" by Pal, Dutta, and Maitra, Magnetic Resonance in Medicine, 2023.
The main workflow is:
The training images at three (or more) settings are first DL-enhanced using the function `DL_smooth_3`. A least-squares (LS) estimate of the underlying parametric maps `W` (reparametrized rho, T1, T2) is then computed using the function `LS_est_par`. Finally, `predict_image_par` uses `W` to predict MR images at new settings.
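In outline, the three steps can be sketched as follows (an illustrative sketch using the package's documented entry points; `train`, the dimensions `n_x, n_y, n_z`, and the TE/TR values are placeholders you must supply):

```python
from DL import DL_smooth_3
from estimate.Bloch import *
from estimate.LS import *

# 1. DL-enhance the vectorized training images (train has shape (n, 3),
#    where n = n_x * n_y * n_z)
train_smooth = DL_smooth_3(train, n_x, n_y, n_z, iter_num)

# 2. Least-squares estimate of the parametric maps W (reparametrized rho, T1, T2)
W = LS_est_par(TE_train, TR_train, train_smooth, TE_scale, TR_scale,
               mask_vec, flip_angle)

# 3. Predict synthetic images at new (TE, TR) settings
pred = predict_image_par(W, TE_test, TR_test, flip_angle)
```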
A complete example is provided in the file `SE_example_DL.py`.
The dataset is in the folder `data`, with three sub-folders: one with the mask, one with the three training volumetric images, and one with nine test volumetric images.
The environment is created by conda with the commands:

```
conda create --name DeepSynMRI
conda activate DeepSynMRI
conda install numpy
conda install pytorch torchvision torchaudio cpuonly -c pytorch
conda install matplotlib
conda install scikit-image
conda install nibabel -c conda-forge
conda install joblib
conda install pandas
```

For GPU support, the pytorch line should be replaced by:

```
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
```
The environment should first be activated with:

```
conda activate DeepSynMRI
```

All the images are vectorized, and the training images are positioned in the 0th, 8th, and 9th positions of a combined training-and-testing matrix named `image_vec`. All subsequent calculations are done from this matrix. All the images are scaled so that the maximum value is ~400.
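The vectorization and scaling steps can be illustrated with plain NumPy (array and variable names here are illustrative, not the package's internals):

```python
import numpy as np

# Toy dimensions; real Brainweb volumes are much larger
n_x, n_y, n_z = 4, 4, 3
n = n_x * n_y * n_z
rng = np.random.default_rng(0)
volumes = [rng.random((n_x, n_y, n_z)) for _ in range(3)]

# Vectorize each 3D volume into a column and stack into an (n, 3) matrix
image_vec = np.column_stack([v.ravel() for v in volumes])

# Rescale so that the maximum intensity is ~400
image_vec = image_vec * (400.0 / image_vec.max())
print(image_vec.shape, image_vec.max())
```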
The three main functions for a bare-minimum run of the whole process are:

- `DL_smooth_3(train, n_x, n_y, n_z, iter_num)` is the main DL function. Here `train` is the training data of size `(n, 3)`, where `n = n_x * n_y * n_z` and `n_x, n_y, n_z` are the dimensions of each original 3D MR image, and `iter_num` is the number of iterations of the DL method for each image. It can be imported as `from DL import DL_smooth_3`. (This function can also be used to smooth the `W` images after they are estimated, instead of smoothing the training images first.)
- `LS_est_par(TE_train, TR_train, train, TE_scale, TR_scale, mask_vec, flip_angle)` computes the least-squares estimate of `W`, i.e., the parametric maps of rho, T1, T2 (reparametrized; see the paper). The output has size `n x m`, where `m` is the number of training images (usually `m = 3`). `TE_train` and `TR_train` are the TE and TR values of the corresponding training images. `mask_vec` is the mask vector of size `(n, 1)`, where 1 means the voxel is masked. `flip_angle` is the flip angle in degrees, which is 90 for spin-echo acquisitions (the default). `TE_scale` and `TR_scale` should be `1` if the TE and TR values are not scaled from the original values[^1].
- `predict_image_par(W, TE_test, TR_test, flip_angle)` returns the predicted synthetic image from the parametric maps `W`, with `TE_test` and `TR_test` as the TE and TR values for the test settings and `flip_angle` as the flip angle of the test images.
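For spin-echo acquisitions with a 90-degree flip angle, the Bloch forward model underlying the prediction reduces to the standard spin-echo signal equation, nu = rho * (1 - exp(-TR/T1)) * exp(-TE/T2). A minimal NumPy sketch (the function name and toy values are illustrative, not the package's code):

```python
import numpy as np

def se_signal(rho, T1, T2, TE, TR):
    """Standard spin-echo signal (flip angle 90 degrees):
    nu = rho * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return rho * (1.0 - np.exp(-TR / T1)) * np.exp(-TE / T2)

# Per-voxel parametric maps (toy values) and one new test setting
rho = np.array([300.0, 250.0])
T1 = np.array([0.8, 1.1])    # seconds
T2 = np.array([0.08, 0.1])   # seconds
pred = se_signal(rho, T1, T2, TE=0.03, TR=2.0)
print(pred)
```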
The last two functions can be imported as:

```
from estimate.Bloch import *
from estimate.LS import *
```

Other functions include:

- `MLE_est_par(W_init, TE_vec, TR_vec, train_mat, TE_scale, TR_scale, sigma_train, mask)` calculates the MLE of `W` under a Rice distribution instead of using the least-squares method (which assumes a simplified normal distribution). `W_init` is an initial value of `W` for this process, and `sigma_train` is an estimate of the standard deviation parameters, usually obtained by other means.
- `DL_single(train, n_x, n_y, n_z, iter_num)` does the same as `DL_smooth_3` but with one image at a time.
- `DL_smooth_m(train, n_x, n_y, n_z, m, iter_num)` does the same as `DL_smooth_3` but with `m` images at a time. This can be used to smooth the images after they are predicted.
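The Rice likelihood used by `MLE_est_par` is based on the Rice density, which can be written with NumPy's `np.i0` Bessel function (a sketch of the distribution itself, not the package's estimation code):

```python
import numpy as np

def rice_pdf(x, nu, sigma):
    """Rice density: (x / sigma^2) * exp(-(x^2 + nu^2) / (2 sigma^2)) * I0(x nu / sigma^2).
    nu is the noise-free signal (Bloch prediction), sigma the noise level."""
    s2 = sigma ** 2
    return (x / s2) * np.exp(-(x ** 2 + nu ** 2) / (2.0 * s2)) * np.i0(x * nu / s2)

# Sanity check: at nu = 0 the Rice distribution reduces to a Rayleigh distribution
x = np.linspace(0.01, 10.0, 200)
rayleigh = (x / 4.0) * np.exp(-x ** 2 / 8.0)   # Rayleigh with sigma = 2
print(np.allclose(rice_pdf(x, nu=0.0, sigma=2.0), rayleigh))
```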
Most of the functions are parallelized.
The structure of the whole package is as follows:

```
.
|-- Check
|   |-- Check.py
|   |-- Check.R
|   |-- compare_rho_etc.R
|   |-- SE_real.py
|   `-- test.py
|-- data
|   |-- mask
|   |   `-- subject47_crisp_v.mnc.gz
|   |-- noise-1-INU-00
|   |   |-- brainweb_0.mnc.gz
|   |   |-- brainweb_1.mnc.gz
|   |   `-- brainweb_2.mnc.gz
|   `-- test-noise-0-INU-00
|       |-- brainweb_0.mnc.gz
|       |-- brainweb_1.mnc.gz
|       |-- brainweb_2.mnc.gz
|       |-- brainweb_3.mnc.gz
|       |-- brainweb_4.mnc.gz
|       |-- brainweb_5.mnc.gz
|       |-- brainweb_6.mnc.gz
|       |-- brainweb_7.mnc.gz
|       `-- brainweb_8.mnc.gz
|-- DL.py
|-- estimate
|   |-- Bloch.py
|   |-- LS.py
|   `-- MLE.py
|-- intermed
|-- LICENSE
|-- models
|   |-- common.py
|   |-- downsampler.py
|   |-- __init__.py
|   `-- skip.py
|-- pred_DL.py
|-- README.md
|-- SE_example_DL.py
`-- utils
    |-- common_utils.py
    |-- denoising_utils.py
    `-- __init__.py
```
If you have a CUDA-enabled GPU with enough memory to accommodate the large 3D network, change the following lines in the `DL.py` file:

- `dtype = torch.FloatTensor` to `dtype = torch.cuda.FloatTensor`, and
- `net_param.data.copy_(new_param.cpu())` to `net_param.data.copy_(new_param.cuda())`.
You can create a citation file from "Cite this repository" on the right side of the online GitHub repository, or copy from the following:
Pal, S., Dutta, S., & Maitra, R. (2023). Personalized synthetic MR imaging with deep learning enhancements. Magnetic Resonance in Medicine, 89(4), 1634-1643. https://doi.org/10.1002/mrm.29527
```bibtex
@article{paletal23,
  author = {Pal, Subrata and Dutta, Somak and Maitra, Ranjan},
  title = {Personalized synthetic MR imaging with deep learning enhancements},
  journal = {Magnetic Resonance in Medicine},
  volume = {89},
  number = {4},
  pages = {1634-1643},
  keywords = {Bloch transform, deep-image-prior, deep-learning, denoising, synthetic MRI},
  doi = {10.1002/mrm.29527},
  url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/mrm.29527},
  eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1002/mrm.29527},
  abstract = {Purpose: Personalized synthetic MRI (syn-MRI) uses MR images of an individual subject acquired at a few design parameters (echo time, repetition time, flip angle) to obtain underlying parametric $(\rho, T_1, T_2)$ maps, from where MR images of that individual at other design parameter settings are synthesized. However, classical methods that use least-squares (LS) or maximum likelihood estimators (MLE) are unsatisfactory at higher noise levels because the underlying inverse problem is ill-posed. This article provides a pipeline to enhance the synthesis of such images in three-dimensional (3D) using a deep learning (DL) neural network architecture for spatial regularization in a personalized setting where having more than a few training images is impractical. Methods: Our DL enhancements employ a Deep Image Prior (DIP) with a U-net type denoising architecture that includes situations with minimal training data, such as personalized syn-MRI. We provide a general workflow for syn-MRI from three or more training images. Our workflow, called DIPsyn-MRI, uses DIP to enhance training images, then obtains parametric images using LS or MLE before synthesizing images at desired design parameter settings. DIPsyn-MRI is implemented in our publicly available Python package DeepSynMRI available at: https://github.com/StatPal/DeepSynMRI. Results: We demonstrate feasibility and improved performance of DIPsyn-MRI on 3D datasets acquired using the Brainweb interface for spin-echo and FLASH imaging sequences, at different noise levels. Our DL enhancements improve syn-MRI in the presence of different intensity nonuniformity levels of the magnetic field, for all but very low noise levels. Conclusion: This article provides recipes and software to realistically facilitate DL-enhanced personalized syn-MRI.},
  year = {2023}
}
```
[^1]: This is the scale by which all the original TE and TR values are multiplied. The scaling was done to mitigate numerical problems in computing the gradient and Hessian corresponding to the Bloch equation, so that the minimum TE/TR value becomes 2.01. However, this scaling is not needed in the current implementation.
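The scaling convention can be illustrated as follows (a sketch with toy TE values, not package code):

```python
import numpy as np

TE_original = np.array([0.01, 0.03, 0.04])   # toy TE values in seconds
TE_scale = 2.01 / TE_original.min()          # chosen so min(TE * scale) == 2.01
TE_train = TE_original * TE_scale
print(TE_scale, TE_train)
```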