MultiAttack(attacks, device=None, verbose=False)¶
MultiAttack is a class to attack a model with multiple attacks against the same images and labels.
- model (nn.Module) – model to attack.
- attacks (list) – list of attacks.
>>> atk1 = torchattacks.PGD(model, eps=8/255, alpha=2/255, iters=40, random_start=True)
>>> atk2 = torchattacks.PGD(model, eps=8/255, alpha=2/255, iters=40, random_start=True)
>>> atk = torchattacks.MultiAttack([atk1, atk2])
>>> adv_images = atk(images, labels)
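Conceptually, MultiAttack applies each attack in sequence and, for each sample, keeps the first adversarial example that changes the model's prediction. A minimal pure-Python sketch of that per-sample logic (toy stand-ins for the attacks and the model; not the library's implementation):

```python
def multi_attack(attacks, predict, images, labels):
    """Try each attack in order; per sample, keep the first adversarial
    example that the model misclassifies (sketch, not library code)."""
    adv = list(images)
    remaining = set(range(len(images)))  # indices not yet fooled
    for attack in attacks:
        if not remaining:
            break  # every sample already has an adversarial example
        for i in list(remaining):
            candidate = attack(images[i], labels[i])
            if predict(candidate) != labels[i]:  # misclassified -> success
                adv[i] = candidate
                remaining.discard(i)
    return adv
```

In the library, each element of `attacks` would be a full attack such as PGD with a random start, so running the same attack twice amounts to random restarts.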
save(data_loader, save_path=None, verbose=True, return_verbose=False, save_predictions=False, save_clean_images=False)¶
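The save() helper conceptually iterates over a data loader, generates adversarial examples for every batch, and reports the robust accuracy. A minimal pure-Python sketch of that loop (toy stand-ins for the attack and the model's prediction; not the library's implementation):

```python
def save_sketch(attack, predict, data_loader):
    """Run `attack` over (images, labels) batches and report robust
    accuracy, i.e. accuracy on the adversarial examples (sketch only)."""
    adv_all, correct, total = [], 0, 0
    for images, labels in data_loader:
        adv = [attack(img, lbl) for img, lbl in zip(images, labels)]
        correct += sum(predict(a) == l for a, l in zip(adv, labels))
        total += len(labels)
        adv_all.append(adv)
    return adv_all, correct / total  # robust accuracy after the attack
```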
LGV(model, trainloader, lr=0.05, epochs=10, nb_models_epoch=4, wd=0.0001, n_grad=1, verbose=True, attack_class=<class 'torchattacks.attacks.bim.BIM'>, **kwargs)¶
LGV attack in the paper ‘LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity’ [https://arxiv.org/abs/2207.13129]
- model (nn.Module) – initial model to attack.
- trainloader (torch.utils.data.DataLoader) – data loader of the unnormalized train set. Must load data in [0, 1]. Be aware that the batch size may impact the success rate: the original paper uses a batch size of 256, and a different batch size might require tuning the learning rate.
- lr (float) – constant learning rate to collect models. In the paper, 0.05 is best for ResNet-50; 0.1 seems best for some other architectures. (Default: 0.05)
- epochs (int) – number of epochs. (Default: 10)
- nb_models_epoch (int) – number of models to collect per epoch. (Default: 4)
- wd (float) – weight decay of SGD to collect models. (Default: 1e-4)
- n_grad (int) – number of models to ensemble at each attack iteration. 1 (default) is recommended for efficient iterative attacks. Higher numbers generally give better results at the expense of computation. -1 uses all models (should be used for single-step attacks like FGSM).
- verbose (bool) – print progress. Install the tqdm package for a nicer progress display. (Default: True)
If a list of models is not provided to load_models(), the attack will start by collecting models along
the SGD trajectory for epochs epochs with the constant learning rate lr.
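The collection schedule implied above is that SGD runs at the constant learning rate lr and a snapshot of the weights is taken nb_models_epoch times per epoch. A hypothetical sketch of which SGD steps would trigger a snapshot (function name and layout are assumptions, not the library's code):

```python
def collection_steps(steps_per_epoch, epochs, nb_models_epoch):
    """Return the SGD step indices at which a model snapshot is taken:
    nb_models_epoch evenly spaced snapshots per epoch (sketch only)."""
    interval = steps_per_epoch // nb_models_epoch
    return [e * steps_per_epoch + (k + 1) * interval - 1
            for e in range(epochs)
            for k in range(nb_models_epoch)]
```

With the defaults (epochs=10, nb_models_epoch=4), this yields 40 collected models along the trajectory.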
- images: \((N, C, H, W)\) where N = number of batches, C = number of channels, H = height and W = width. It must have a range [0, 1].
- labels: \((N)\) where each value \(y_i\) is \(0 \leq y_i \leq\) number of labels.
- output: \((N, C, H, W)\).
>>> attack = torchattacks.LGV(model, trainloader, lr=0.05, epochs=10, nb_models_epoch=4, wd=1e-4, n_grad=1, attack_class=BIM, eps=4/255, alpha=4/255/10, steps=50, verbose=True)
>>> attack.collect_models()
>>> attack.save_models('./models/lgv/')
>>> adv_images = attack(images, labels)
Collect LGV models along the SGD trajectory
Load collected models
Arguments: list_models (list of nn.Module): list of LGV models.
Save collected models to the path directory
Arguments: path (str): directory where to save models.
LightEnsemble(list_models, order='shuffle', n_grad=1)¶
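Based only on the signature above, a LightEnsemble with order='shuffle' would cycle through the collected models in a shuffled order, using n_grad of them per attack step. A hypothetical sketch of that selection logic (class name, method, and internals are all assumptions, not the library's implementation):

```python
import random

class LightEnsembleSketch:
    """Sketch of cycling through an ensemble, n_grad models at a time."""
    def __init__(self, list_models, order='shuffle', n_grad=1):
        self.models = list(list_models)
        self.order = order
        # n_grad=-1 is assumed to mean "use all models at once"
        self.n_grad = len(self.models) if n_grad == -1 else n_grad
        self.pos = 0  # cursor into the (possibly shuffled) model list

    def next_models(self):
        """Return the next n_grad models to ensemble for one attack step."""
        if self.order == 'shuffle' and self.pos == 0:
            random.shuffle(self.models)  # reshuffle at each full cycle
        picked = [self.models[(self.pos + i) % len(self.models)]
                  for i in range(self.n_grad)]
        self.pos = (self.pos + self.n_grad) % len(self.models)
        return picked
```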