This repository contains the material from the paper "IM-Fuse: A Mamba-based Fusion Block for Brain Tumor Segmentation with Incomplete Modalities". It includes everything needed to reproduce our framework, as well as the competing methods evaluated on the BraTS 2023 dataset for the glioma segmentation task.
Brain tumor segmentation is a crucial task in medical imaging that involves the integrated modeling of four distinct imaging modalities to accurately delineate tumor regions. Unfortunately, in real-life scenarios, the complete acquisition of all four modalities is frequently hindered by factors such as scanning costs, time constraints, and patient condition. To address this challenge, numerous deep learning models have been developed to perform brain tumor segmentation under conditions of missing imaging modalities.
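As background on the training setup such models typically use (a generic sketch, not code from this repository): missing modalities are commonly simulated during training by randomly masking out a subset of the four MRI channels, so the network learns to segment from any non-empty combination of inputs.

```python
import torch

def random_modality_dropout(x: torch.Tensor, p_drop: float = 0.5):
    """Randomly mask modalities in a batch of shape (B, 4, D, H, W).

    Each MRI channel (T1, T1ce, T2, FLAIR) is dropped independently with
    probability ``p_drop``; at least one channel per sample is always kept.
    """
    B, M = x.shape[0], x.shape[1]
    keep = torch.rand(B, M, device=x.device) > p_drop   # True = available
    empty = ~keep.any(dim=1)                            # samples that lost every modality
    if empty.any():
        # Re-enable one random modality for those samples.
        keep[empty, torch.randint(M, (int(empty.sum()),), device=x.device)] = True
    return x * keep.view(B, M, 1, 1, 1).to(x.dtype), keep

# Example: mask a batch of two 4-modality volumes.
x_masked, keep = random_modality_dropout(torch.randn(2, 4, 128, 128, 128))
```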
Despite these advancements, the majority of existing models have been evaluated primarily on the 2018 edition of the BraTS dataset, which comprises only 285 training cases. We therefore re-evaluate the leading approaches on the larger and more recent BraTS 2023 dataset.
Moreover, we introduce and evaluate Mamba as an alternative fusion mechanism for brain tumor segmentation with missing modalities. Experimental results indicate that transformer-based architectures achieve superior performance on BraTS 2023, outperforming the purely convolutional models that previously achieved state-of-the-art results on BraTS 2018. Notably, the proposed Mamba-based architecture performs promisingly against the state of the art, matching and in some cases outperforming the transformers.
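For intuition, here is a minimal sketch of a Mamba-based fusion block of the kind explored in this work. It is an illustration, not the IM-Fuse implementation: the class name and the final averaging step are simplifications of our own, and it assumes the `mamba_ssm` package (whose kernels require a CUDA GPU). The position-wise interleaving of modality tokens loosely mirrors the `--interleaved_tokenization` option used below.

```python
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # pip install mamba-ssm (CUDA-only kernels)


class MambaFusionBlock(nn.Module):
    """Fuse token sequences from four modality encoders with a Mamba layer.

    Tokens are interleaved position-wise (t1[0], t1ce[0], t2[0], flair[0],
    t1[1], ...) so the selective state-space scan sees every modality of a
    patch before moving on to the next patch.
    """

    def __init__(self, d_model: int = 256):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)

    def forward(self, tokens: list[torch.Tensor]) -> torch.Tensor:
        # tokens: list of 4 tensors, each of shape (B, L, C).
        B, L, C = tokens[0].shape
        x = torch.stack(tokens, dim=2).reshape(B, L * len(tokens), C)  # interleave
        x = self.mamba(self.norm(x))
        # De-interleave and average across modalities to obtain fused tokens.
        return x.reshape(B, L, len(tokens), C).mean(dim=2)


if torch.cuda.is_available():
    block = MambaFusionBlock(256).cuda()
    fused = block([torch.randn(2, 64, 256, device="cuda") for _ in range(4)])
    print(fused.shape)  # torch.Size([2, 64, 256])
```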
If you use this code or the paper in your research, please cite: [Bib]
Before running this project, you need to download the data from the BraTS 2023 Challenge, specifically the subset for the Glioma Segmentation task.
Clone this repository, create a Python environment for the project, and activate it. Then install all the dependencies with pip:
```bash
git clone git@github.com:AImageLab-zip/IM-Fuse.git
cd IM-Fuse
python -m venv imfuse_venv
source imfuse_venv/bin/activate
pip install -r requirements.txt
```
Run `preprocess.py` with the following arguments:

```bash
python preprocess.py \
  --input-path <INPUT_PATH> \   # Directory containing the unprocessed BraTS 2023 files
  --output-path <OUTPUT_PATH>   # Destination directory for the preprocessed dataset
```
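For example, assuming the raw challenge data was extracted to `/data/BRATS2023` (both paths below are hypothetical), the output directory name then matches the `--datapath` expected by the training script:

```bash
python preprocess.py --input-path /data/BRATS2023 --output-path /data/BRATS2023_Training_npy
```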
Run the training script `train_poly.py` with the following arguments:

```bash
python train_poly.py \
  --datapath <PATH>/BRATS2023_Training_npy \   # Directory containing BRATS2023 .npy files
  --num_epochs 1000 \                          # Total number of training epochs
  --dataname BRATS2023 \                       # Dataset identifier
  --savepath <OUTPUT_PATH> \                   # Directory for saving checkpoints
  --mamba_skip \                               # Use Mamba in the skip connections
  --interleaved_tokenization                   # Enable interleaved tokenization
```
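A complete invocation might look as follows; all paths are placeholders to adapt to your setup:

```bash
python train_poly.py \
  --datapath /data/BRATS2023_Training_npy \
  --num_epochs 1000 \
  --dataname BRATS2023 \
  --savepath ./checkpoints/imfuse \
  --mamba_skip \
  --interleaved_tokenization
```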
Run the test script `test.py` with the following arguments:

```bash
python test.py \
  --datapath <PATH>/BRATS2023_Training_npy \   # Directory containing BRATS2023 .npy files
  --dataname BRATS2023 \                       # Dataset identifier
  --savepath <OUTPUT_PATH> \                   # Directory for saving results
  --resume <RESUME_PATH> \                     # Path to the checkpoint to evaluate
  --mamba_skip \                               # Use Mamba in the skip connections
  --batch_size 2 \                             # Batch size
  --interleaved_tokenization                   # Enable interleaved tokenization
```
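For instance, to evaluate a checkpoint produced by the training run above (paths and the checkpoint filename are hypothetical):

```bash
python test.py \
  --datapath /data/BRATS2023_Training_npy \
  --dataname BRATS2023 \
  --savepath ./results/imfuse \
  --resume ./checkpoints/imfuse/model_last.pth \
  --mamba_skip \
  --batch_size 2 \
  --interleaved_tokenization
```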
We provide implementations for evaluating the primary competitor models on the BraTS 2023 dataset. Please consult the respective README files for detailed instructions on installation, usage, and reproduction of results.
- Missing as Masking: Arbitrary Cross-modal Feature Reconstruction for Incomplete Multimodal Brain Tumor Segmentation
- M3AE: Multimodal Representation Learning for Brain Tumor Segmentation with Missing Modalities
- Multi-modal Learning with Missing Modality via Shared-Specific Feature Modelling
- SFusion: Self-attention based N-to-One Multimodal Fusion Block
- mmFormer: Multimodal Medical Transformer for Incomplete Multimodal Learning of Brain Tumor Segmentation
- Hetero-Modal Variational Encoder-Decoder for Joint Modality Completion and Segmentation
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion
