Prototype-Based Pseudo-Label Denoising for Source-Free Domain Adaptation in Remote Sensing Semantic Segmentation
- [2026/1/17] 🎉🎉 Our paper is accepted to ICASSP'26.
- [2025/9/21] 😀😀 The [arxiv] paper is available.
- [2025/9/21] ✨✨ The `README.md` has been updated.
- [2025/9/21] 🤓🤓 The [arxiv] paper has been submitted.
- [2025/9/16] ✨✨ The [arxiv] paper is coming soon.
- [2025/9/15] 🔥🔥 This work was submitted.
- ☑️ submit to arxiv
- ❎ upload training code
- ❎ upload ProSFDA model weights
Install script
```shell
pip install torch==1.10.2+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install torchvision==0.11.3+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full==1.5.0 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html
pip install kornia matplotlib prettytable timm yapf==0.40.1
```

For CN users:

```shell
pip install torch==1.10.2+cu111 -f https://mirror.sjtu.edu.cn/pytorch-wheels/cu111/?mirror_intel_list
pip install torchvision==0.11.3+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full==1.5.0 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html
pip install kornia matplotlib prettytable timm yapf==0.40.1
```

For installation reference, see:
- Torch and torchvision version compatibility.
- Version compatibility of mmcv and torch.
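A quick sanity check after installation (a minimal sketch; the expected versions are the ones pinned above):

```python
# Verify the pinned versions and that CUDA is visible to PyTorch.
import torch
import torchvision
import mmcv

print("torch:", torch.__version__)              # expected 1.10.2+cu111
print("torchvision:", torchvision.__version__)  # expected 0.11.3+cu111
print("mmcv-full:", mmcv.__version__)           # expected 1.5.0
print("CUDA available:", torch.cuda.is_available())
```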
We selected Potsdam, Vaihingen, and LoveDA as benchmark datasets and created train/val/test lists for researchers.
Potsdam download
The Potsdam dataset is an urban semantic segmentation dataset used in the 2D Semantic Labeling Contest - Potsdam.
The dataset can be requested at the challenge homepage. The '2_Ortho_RGB.zip', '3_Ortho_IRRG.zip' and '5_Labels_all_noBoundary.zip' are required.
Vaihingen download
The Vaihingen dataset is an urban semantic segmentation dataset used in the 2D Semantic Labeling Contest - Vaihingen.
The dataset can be requested at the challenge homepage. The 'ISPRS_semantic_labeling_Vaihingen.zip' and 'ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip' are required.
Place the downloaded files in the corresponding paths. The format is as follows:
details
```
ProSFDA/
├── data/
│   ├── Potsdam_IRRG_DA/
│   │   ├── 3_Ortho_IRRG.zip
│   │   └── 5_Labels_all_noBoundary.zip
│   ├── Vaihingen_IRRG_DA/
│   │   ├── ISPRS_semantic_labeling_Vaihingen.zip
│   │   └── ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip
```
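Before converting, a quick check that the archives are where the scripts expect them (a minimal sketch; paths follow the tree above):

```python
# Check that the raw ISPRS archives are in place (paths as in the tree above).
from pathlib import Path

expected = [
    "data/Potsdam_IRRG_DA/3_Ortho_IRRG.zip",
    "data/Potsdam_IRRG_DA/5_Labels_all_noBoundary.zip",
    "data/Vaihingen_IRRG_DA/ISPRS_semantic_labeling_Vaihingen.zip",
    "data/Vaihingen_IRRG_DA/ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip",
]
missing = [p for p in expected if not Path(p).exists()]
print("All archives found." if not missing else f"Missing: {missing}")
```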
After that, we can convert the datasets:
dataset convert
- Potsdam

```shell
python tools/convert_datasets/potsdam.py data/Potsdam_IRRG/ --clip_size 512 --stride_size 512
python tools/convert_datasets/potsdam.py data/Potsdam_RGB/ --clip_size 512 --stride_size 512
```

- Vaihingen

```shell
python tools/convert_datasets/vaihingen.py data/Vaihingen_IRRG/ --clip_size 512 --stride_size 256
```
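For reference, `--clip_size` and `--stride_size` control how each large orthophoto is cut into patches. A minimal sketch of that sliding-window clipping (the actual conversion scripts also handle label conversion, file naming, and the split lists):

```python
# Sliding-window clipping: cut a large tile into clip_size x clip_size patches.
import numpy as np

def clip_tiles(image: np.ndarray, clip_size: int = 512, stride: int = 512):
    """Yield (y, x, patch) windows from a large orthophoto."""
    h, w = image.shape[:2]
    for y in range(0, h - clip_size + 1, stride):
        for x in range(0, w - clip_size + 1, stride):
            yield y, x, image[y:y + clip_size, x:x + clip_size]

# e.g. Vaihingen uses stride 256, so neighbouring 512x512 patches overlap by half.
```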
mit_b5.pth:
We provide a script `mit2mmseg.py` in the tools directory to convert the keys of models from the official repo to MMSegmentation style.
model convert
```shell
python tools/model_converters/mit2mmseg.py ${PRETRAIN_PATH} ./pretrained
```

Or you can download it from Google Drive.
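If you run the conversion yourself, a quick sanity check of the result (a sketch, assuming the converted file is a plain state dict, possibly wrapped under a `state_dict` key):

```python
# Inspect the converted checkpoint before training.
import torch

ckpt = torch.load("pretrained/mit_b5.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints wrap the weights
print(len(state_dict), "parameter tensors")
print(list(state_dict)[:5])  # keys should follow MMSegmentation naming
```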
The resulting directory structure is as follows:
structure
```
ProSFDA/
├── pretrained/
│   ├── mit_b5.pth (needed)
│   └── other.pth (optional)
```
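With the weights in place, an MMSegmentation-style config typically points the backbone at them; a hedged sketch (the exact config files and backbone settings in this repo may differ):

```python
# Fragment of an MMSegmentation-style config referencing the converted weights.
model = dict(
    backbone=dict(
        type='MixVisionTransformer',  # SegFormer MiT-B5 backbone in MMSegmentation
        init_cfg=dict(type='Pretrained', checkpoint='pretrained/mit_b5.pth'),
    ),
)
```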