Paper • Getting Started • Pretrained Models • GUI
CoroSAM is a deep learning framework for interactive coronary artery segmentation in coronary angiograms, built on a computationally efficient SAM-based architecture with custom convolutional adapters.
This is the official implementation of the paper published in Computer Methods and Programs in Biomedicine.
- Installation
- Pretrained checkpoints
- ARCADE dataset
- Preprocessing
- Training
- Testing
- Testing on different datasets
- GUI application
- Citation
- Acknowledgments
First, install PyTorch following the official installation guide.
Recommended version: torch==2.6.0+cu124
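To confirm the expected PyTorch build is active before installing the remaining dependencies, a quick check such as the following can help (a minimal sketch; adjust the version string to your setup):

```python
import torch

# Print the installed PyTorch build and confirm CUDA is visible.
# The recommended build above is torch==2.6.0+cu124; adjust as needed.
print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
```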
```bash
git clone https://github.com/mife-git/corosam.git
cd corosam
pip install -r requirements.txt
```

Download and place in `checkpoints/Pretrained/`:
| Model | Source | Path |
|---|---|---|
| LiteMedSAM | GitHub | checkpoints/Pretrained/lite_medsam.pth |
| SAMMed2D | GitHub | checkpoints/Pretrained/sam-med2d_b.pth |
Our pretrained CoroSAM model trained on ARCADE is available here:
Download CoroSAM Checkpoint

Save as: `checkpoints/CoroSAM/CoroSAM_Final_Training.pt`
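To sanity-check the downloaded file, you can load it like any PyTorch checkpoint and inspect its contents (a minimal sketch; the actual key layout is whatever the training script saved):

```python
import torch

# Load the checkpoint on CPU and peek at its structure.
# The key names are not guaranteed; inspect whatever gets printed.
ckpt = torch.load("checkpoints/CoroSAM/CoroSAM_Final_Training.pt", map_location="cpu")
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys())[:10])
else:
    print("checkpoint object type:", type(ckpt))
```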
- Download ARCADE from Zenodo
- Extract to your workspace
- Use only the `syntax` subset for this project
```
arcade/
└── syntax/
    ├── train/
    ├── val/
    └── test/
```
Transform ARCADE COCO annotations into training-ready format.
Edit `preprocessing_config.yaml`:

```yaml
dataset_root: "C:/path/to/arcade/syntax"
seed: 2025
```

```bash
# Step 1: Convert COCO to binary masks
python preprocessing/convert_coco_to_binary_masks.py
# Step 2: Merge train+val and apply augmentation
python preprocessing/data_augmentation.py
# Step 3: Create k-fold splits
python preprocessing/split_dataset.py
```

Output structure:
```
syntax/
├── train/                    # Original train set
├── val/                      # Original val set
├── test/                     # Test set
├── train_all/                # Merged train+val
│   ├── images/
│   ├── annotations/
│   ├── images_augmented/
│   └── annotations_augmented/
└── kf_split/                 # 5-fold cross-validation
    ├── set1/
    ├── set2/
    └── ...
```
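Step 1 converts the COCO-format annotations to binary masks, which the repository script does for you. Purely to illustrate the idea, a rough sketch of such a conversion using `pycocotools` (paths and file naming here are hypothetical, not the script's actual behavior):

```python
import numpy as np
from pycocotools.coco import COCO
from PIL import Image

# Illustrative only: merge all vessel annotations of one image into a single binary mask.
coco = COCO("C:/path/to/arcade/syntax/train/annotations/train.json")  # hypothetical path
for img_id in coco.getImgIds():
    info = coco.loadImgs(img_id)[0]
    mask = np.zeros((info["height"], info["width"]), dtype=np.uint8)
    for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_id)):
        mask = np.maximum(mask, coco.annToMask(ann))
    # Save as an 8-bit PNG next to the working directory (illustrative naming).
    Image.fromarray(mask * 255).save(f"{info['file_name'].rsplit('.', 1)[0]}_gt.png")
```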
Train CoroSAM on your data with flexible configurations.
Edit `train_config.yaml`:

```yaml
# Dataset
dataset_root: "C:/path/to/arcade/syntax"
k_fold_path: "C:/path/to/arcade/syntax/kf_split"
# Model
model_name: "LiteMedSAM"
exp_name: "CoroSAM_Training"
# Adapters
use_adapters: true
use_conv_adapters: true
channel_reduction: 0.25
# Training
n_folds: 5 # 5-fold CV or set to 1 for single run
epochs: 25
batch_size: 4
lr: 0.0005
# Logging
use_wandb: true
proj_name: "CoroSAM"
```
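The adapter options above (`use_adapters`, `use_conv_adapters`, `channel_reduction`) control the lightweight adapter blocks added to the SAM-based backbone. Purely as an illustration of the general idea, a convolutional bottleneck adapter with a 0.25 channel reduction might look like the sketch below; this is an assumption for illustration, not the exact CoroSAM module:

```python
import torch
import torch.nn as nn

class ConvAdapter(nn.Module):
    """Illustrative bottleneck adapter: 1x1 reduce -> 3x3 conv -> 1x1 expand, residual add."""

    def __init__(self, channels: int, channel_reduction: float = 0.25):
        super().__init__()
        hidden = max(1, int(channels * channel_reduction))  # e.g. 256 -> 64 with 0.25
        self.down = nn.Conv2d(channels, hidden, kernel_size=1)
        self.conv = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1)
        self.up = nn.Conv2d(hidden, channels, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Adapter output is added back onto the incoming feature map.
        return x + self.up(self.act(self.conv(self.act(self.down(x)))))

# Quick shape check on a dummy feature map.
feats = torch.randn(1, 256, 64, 64)
print(ConvAdapter(256)(feats).shape)  # torch.Size([1, 256, 64, 64])
```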
K-fold cross-validation:

```bash
python training/train.py --config train_config.yaml
```

Single training run:

```yaml
n_folds: 1
train_path: "C:/path/to/arcade/syntax/train_all"
val_path: "C:/path/to/arcade/syntax/test"
```
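Both modes read the same YAML file, so experiments can also be scripted by editing the config programmatically before launching `train.py`. A minimal sketch, assuming PyYAML (the repository may parse the file differently):

```python
import subprocess
import yaml

# Load the training config, switch it to a single run, and relaunch training.
with open("train_config.yaml") as f:
    cfg = yaml.safe_load(f)

cfg["n_folds"] = 1
cfg["train_path"] = "C:/path/to/arcade/syntax/train_all"
cfg["val_path"] = "C:/path/to/arcade/syntax/test"

with open("train_config_single.yaml", "w") as f:
    yaml.safe_dump(cfg, f)

subprocess.run(["python", "training/train.py", "--config", "train_config_single.yaml"], check=True)
```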
Comprehensive evaluation with detailed metrics and visualizations.

Edit `test_config.yaml`:

```yaml
# Model
model_name: "LiteMedSAM"
checkpoint: "checkpoints/CoroSAM/CoroSAM_Final_Training.pt"
# Dataset
test_path: "C:/path/to/arcade/syntax/test"
results_path: "results/CoroSAM_ARCADE_Test"
# Options
save_predictions: true  # Save visualization images
```

```bash
python testing/test.py --config test_config.yaml
```
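The test script handles the evaluation itself; for reference, the kind of overlap metric typically reported for vessel segmentation (e.g. the Dice score) can be computed from a predicted and a ground-truth binary mask as below. This is an illustrative snippet with hypothetical file names, not the repository's evaluation code:

```python
import numpy as np
from PIL import Image

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Hypothetical file names; binarize by thresholding the grayscale PNGs.
pred = np.array(Image.open("results/CoroSAM_ARCADE_Test/example_pred.png")) > 0
gt = np.array(Image.open("C:/path/to/arcade/syntax/test/annotations/example_gt.png")) > 0
print(f"Dice: {dice_score(pred, gt):.4f}")
```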
CoroSAM can be evaluated on any custom dataset! Your dataset must follow the ARCADE preprocessing output structure:
```
dataset_name/
└── test/                     # or any folder name
    ├── images/
    │   ├── dataset_name_1.png
    │   ├── dataset_name_2.png
    │   └── ...
    └── annotations/
        ├── dataset_name_1_gt.png
        ├── dataset_name_2_gt.png
        └── ...
```
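Before pointing `test_path` at a new dataset, it can help to verify that every image has a matching `_gt` mask in the expected layout (a small sketch based on the structure above; adjust the path to your dataset):

```python
from pathlib import Path

dataset = Path("path/to/your_dataset/test")  # the same folder you will set as test_path
images = sorted((dataset / "images").glob("*.png"))
missing = [img.name for img in images
           if not (dataset / "annotations" / f"{img.stem}_gt.png").exists()]

print(f"{len(images)} images found, {len(missing)} without a matching *_gt.png mask")
for name in missing:
    print("  missing ground truth for:", name)
```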
```yaml
# test_config.yaml
test_path: "path/to/your_dataset/test"
checkpoint: "checkpoints/CoroSAM/corosam_pretrained.pth"
```

```bash
python testing/test.py --config test_config.yaml
```
Interactive segmentation with a user-friendly interface.

```bash
python gui/gui_corosam.py
```

If you find CoroSAM useful in your research, please cite our paper:

```bibtex
@article{corosam2025,
title={CoroSAM: adaptation of the Segment Anything Model for interactive segmentation in Coronary angiograms},
journal={Computer Methods and Programs in Biomedicine},
year={2025},
publisher={Elsevier},
doi={10.1016/j.cmpb.2025.108587},
url={https://www.sciencedirect.com/science/article/pii/S0169260725005887}
}
```

This project builds upon excellent open-source work:
- Segment Anything Model (SAM): facebookresearch/segment-anything
- MedSAM: bowang-lab/MedSAM
- SAM-Med2D: OpenGVLab/SAM-Med2D