
PatchGuard: Adversarially Robust Anomaly Detection and Localization through Vision Transformers and Pseudo Anomalies

Official PyTorch implementation of "PatchGuard: Adversarially Robust Anomaly Detection and Localization through Vision Transformers and Pseudo Anomalies" (CVPR 2025).

Authors: Mojtaba Nafez, Amirhossein Koochakian, Arad Maleki, Jafar Habibi, Mohammad Hossein Rohban
Links: arXiv, Proceedings


🔍 Introduction

Anomaly Detection (AD) and Anomaly Localization (AL) are crucial in fields that demand high reliability, such as medical imaging and industrial monitoring. However, current AD and AL approaches are often susceptible to adversarial attacks due to limitations in training data, which typically include only normal, unlabeled samples. This study introduces PatchGuard, an adversarially robust AD and AL method that incorporates pseudo anomalies with localization masks within a Vision Transformer (ViT)-based architecture to address these vulnerabilities. We begin by examining the essential properties of pseudo anomalies, then provide theoretical insights into the attention mechanisms required to enhance the adversarial robustness of AD and AL systems. We then present our approach, which leverages Foreground-Aware Pseudo-Anomalies to overcome the deficiencies of previous anomaly-aware methods. Our method incorporates these crafted pseudo-anomaly samples into a ViT-based framework, with adversarial training guided by a novel loss function designed to improve model robustness, as supported by our theoretical analysis. Experimental results on well-established industrial and medical datasets demonstrate that PatchGuard significantly outperforms previous methods in adversarial settings, achieving performance gains of 53.2% in AD and 68.5% in AL, while also maintaining competitive accuracy in non-adversarial settings.
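For intuition only, below is a minimal PyTorch sketch of the training recipe outlined above: pseudo-anomalous images paired with localization masks, PGD adversarial examples crafted against a pixel-wise loss, and a training step on the perturbed batch. Every name and hyperparameter here (the model interface, the binary cross-entropy loss, eps, alpha, the PGD step count) is an illustrative assumption, not the repository's actual implementation; see main.py for the real training loop.

import torch
import torch.nn.functional as F

def pgd_attack(model, images, masks, eps=4/255, alpha=1/255, steps=5):
    """Craft untargeted PGD perturbations against a pixel-wise anomaly loss."""
    adv = images.clone().detach()
    adv = (adv + torch.empty_like(adv).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        adv.requires_grad_(True)
        scores = model(adv)  # assumed: per-pixel anomaly probabilities in [0, 1]
        loss = F.binary_cross_entropy(scores, masks)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = (adv + alpha * grad.sign()).detach()                        # ascend the loss
        adv = (images + (adv - images).clamp(-eps, eps)).clamp(0, 1)      # project into eps-ball
    return adv

def train_step(model, optimizer, normal_imgs, pseudo_imgs, pseudo_masks):
    """One adversarial training step on a mixed normal / pseudo-anomaly batch
    (equal batch sizes and float masks assumed for brevity)."""
    images = torch.cat([normal_imgs, pseudo_imgs])
    # Normal images get an all-zero target mask; pseudo anomalies keep the
    # localization mask recorded when the anomaly was pasted onto the foreground.
    masks = torch.cat([torch.zeros_like(pseudo_masks), pseudo_masks])
    adv_images = pgd_attack(model, images, masks)
    optimizer.zero_grad()
    loss = F.binary_cross_entropy(model(adv_images), masks)
    loss.backward()
    optimizer.step()
    return loss.item()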


⚡ Colab Notebook

An interactive Colab notebook is provided for quick experimentation with PatchGuard.


🛠️ Setup

Clone the Project

git clone https://github.com/rohban-lab/PatchGuard.git
cd PatchGuard

Download Datasets and Foreground Masks

To prepare the datasets for training and evaluation, simply run the following command:

python download_data.py --dataset <DATASET_NAME>

The mask directory, named foreground_mask, will be placed alongside the training images folder. For example:

datasets/MVTec/toothbrush/train/
                            ├── good/
                            └── foreground_mask/
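For example, to fetch MVTec AD into the layout shown above (the accepted dataset names are defined in download_data.py):

python download_data.py --dataset MVTec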

Download Pretrained Weights

Weights for a specific dataset and class can be downloaded by running:

python download_weight.py --dataset <DATASET_NAME> --class_name <CLASS_NAME> --checkpoint_dir <SAVE_DIR>
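For instance, to fetch weights for the MVTec AD toothbrush class into a local checkpoints/ directory (the class and directory names are illustrative):

python download_weight.py --dataset MVTec --class_name toothbrush --checkpoint_dir checkpoints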

Note: Dataset and weight preparation can be automatically triggered by adding the --use_data_prep and --use_weight_prep flags to the training or evaluation commands.


🧪 Training

To train the model from scratch:

python main.py --mode train --class_name <CLASS_NAME> --dataset <DATASET_NAME> --dataset_dir <DATASET_DIR> --epochs <NUM_EPOCHS>
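For example, to train on the MVTec AD toothbrush class downloaded above (the epoch count is illustrative, and --dataset_dir is assumed to point at the datasets/ root created by download_data.py):

python main.py --mode train --class_name toothbrush --dataset MVTec --dataset_dir datasets --epochs 100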

📈 Evaluation

To evaluate a trained model:

python main.py --mode test --class_name <CLASS_NAME> --dataset <DATASET_NAME> --dataset_dir <DATASET_DIR> --step_test <NUM_ADV_STEPS> --epsilon_test <ADV_EPSILONS_SEQ> --checkpoint_dir <WEIGHT_DIR>
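For instance, a 10-step adversarial evaluation at two perturbation budgets might look like the following (the epsilon values and the space-separated sequence format are assumptions; check main.py's argument parser for the exact syntax):

python main.py --mode test --class_name toothbrush --dataset MVTec --dataset_dir datasets --step_test 10 --epsilon_test 0.0078 0.0157 --checkpoint_dir checkpoints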

🖼️ Visualization

To visualize localization results:

python main.py --mode visualization --class_name <CLASS_NAME> --dataset <DATASET_NAME> --dataset_dir <DATASET_DIR> --epsilon_visualization <ADV_EPSILON> --step_visualization <NUM_ADV_STEPS> --checkpoint_dir <WEIGHT_DIR>
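For example, using the same illustrative settings as in the evaluation command above:

python main.py --mode visualization --class_name toothbrush --dataset MVTec --dataset_dir datasets --epsilon_visualization 0.0157 --step_visualization 10 --checkpoint_dir checkpoints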

📚 Datasets

Dataset       | Official Page | Foreground Mask | Model Weights
--------------|---------------|-----------------|--------------
MVTec AD      | Link          | Link            | Link
VisA          | Link          | Link            | Link
MPDD          | Link          | Link            | -
BTAD          | Link          | Link            | -
WFDD          | Link          | -               | -
DTD-Synthetic | Link          | -               | -
BraTS2021     | Link          | Link            | -
HeadCT        | Link          | Link            | -

Notes:

  • Links to datasets, masks, and pretrained model weights will be updated here.
  • WFDD and DTD-Synthetic are texture datasets in which the entire image area is considered foreground.

📚 Citation

If you find this paper and repository helpful in your research, please cite us:

@InProceedings{Nafez_2025_CVPR,
    author    = {Nafez, Mojtaba and Koochakian, Amirhossein and Maleki, Arad and Habibi, Jafar and Rohban, Mohammad Hossein},
    title     = {PatchGuard: Adversarially Robust Anomaly Detection and Localization through Vision Transformers and Pseudo Anomalies},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {20383-20394}
}
