Hello,
Thank you for sharing this interesting work. I am using a custom dataset of RGB images of size 224x224, organized into five class folders (labels 0 to 4) under train/images/0..4, with no validation folder. For the training phase I use the predefined resnet50 architecture together with the parameters below:
from cox.utils import Parameters  # wraps the dict so train_model can read the args as attributes

train_kwargs = {
    'out_dir': "./train_out",
    'adv_train': 1,            # adversarial training enabled
    'constraint': '2',         # L2 PGD attack during training
    'eps': 0.05,
    'attack_lr': 1.5,
    'attack_steps': 10,
    'epochs': 10,
    'log_iters': 5,
    'lr': 0.001,
    'momentum': 0.9,
    'weight_decay': 1e-3,
    'use_best': True,
    'random_restarts': 0,
    'save_ckpt_iters': -1
}
train_args = Parameters(train_kwargs)
train.train_model(train_args, model, (train_loader, val_loader), store=out_store)
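For reference, since the dataset only has a train folder and no validation split, the train_loader/val_loader pair passed above could be built along these lines with plain torchvision utilities (a minimal sketch; the data path, split fraction, and transform are assumptions, not part of the original setup):

from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

# ImageFolder maps the 0..4 subfolders to class indices automatically
full_train = datasets.ImageFolder('./train/images', transform=transforms.ToTensor())

# hold out roughly 10% of the training images as a validation set
n_val = int(0.1 * len(full_train))
train_set, val_set = random_split(full_train, [len(full_train) - n_val, n_val])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=2)
val_loader = DataLoader(val_set, batch_size=32, shuffle=False, num_workers=2)

If I understand the robustness wrapper correctly, normalization with the dataset's mean/std happens inside the wrapped model, so the loaders only need to yield image tensors in [0, 1] together with their labels.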
Training produces several .pt checkpoint files; I then use the best one for the test phase:
# Load the best checkpoint from training
model_kwargs = {
    'arch': 'resnet50',
    'dataset': ds,
    'resume_path': './train_out/4bce8667-bc86-4776-aa1d-1489eacda01f/checkpoint.pt.best'
}
model, _ = model_utils.make_and_restore_model(**model_kwargs)
model.eval()
pass  # suppresses the cell's output in the notebook
BATCH_SIZE = 32
NUM_WORKERS = 2
_, test_loader = ds.make_loaders(workers=NUM_WORKERS, batch_size=BATCH_SIZE)
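A quick clean-accuracy pass over this test loader is a useful sanity check that the restored checkpoint actually learned something before running the notebook. A minimal sketch, assuming a CUDA device and that the robustness-wrapped model returns a (logits, image) tuple when called:

import torch as ch

model = model.cuda()
correct, total = 0, 0
with ch.no_grad():
    for images, labels in test_loader:
        images, labels = images.cuda(), labels.cuda()
        logits, _ = model(images)  # wrapped model returns (output, input image)
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
print(f'clean accuracy: {correct / total:.3f}')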
# PGD parameters for the visualization attack
kwargs = {
    # 'criterion': ch.nn.CrossEntropyLoss(),
    'custom_loss': activation_loss,
    'constraint': '2',   # L2 perturbation
    'eps': 50,           # large L2 radius used for input visualization
    'step_size': 0.5,
    'iterations': 200,
    'do_tqdm': True,
    'targeted': True,
}
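In case the call site matters, these kwargs are passed through the wrapped model to generate the maximizing inputs, roughly like this (a sketch; the seed batch and the target-label choice are placeholders, and activation_loss is the notebook's custom loss):

import torch as ch

# take one batch of seed images from the test loader
_, (im, label) = next(enumerate(test_loader))
im, label = im.cuda(), label.cuda()

# pick target classes to maximize (placeholder: shift each label by one)
targ = (label + 1) % 5

# the wrapped model runs targeted PGD with the kwargs above
# and returns (output, perturbed images)
_, adv_im = model(im, targ, make_adv=True, **kwargs)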
I then ran the remaining cells of the maximizing_inputs notebook, but the output representations are completely noisy and the top-5 images are not sorted according to their labels. Could you please help me with this issue?
Any comments would be appreciated.