Pyramid Adversarial Training Improves ViT Performance

Charles Herrmann* Kyle Sargent* Lu Jiang Ramin Zabih
Huiwen Chang Ce Liu Dilip Krishnan Deqing Sun
Google Research

Left: Visual example of a pyramid adversarial image. We show the original image, multiple scales of a perturbation pyramid, and the perturbed image. The perturbation is adversarially learned at multiple scales, with a different weight for each scale. Right: Examples of evaluation datasets and our gains. We show thumbnails of in-distribution and out-of-distribution datasets, along with the gains from applying our technique to each dataset. (Note that lower is better for ImageNet-C.)

Abstract

Aggressive data augmentation is a key component of the strong generalization capabilities of Vision Transformer (ViT). One such data augmentation technique is adversarial training (AT); however, many prior works have shown that it often results in poor clean accuracy. In this work, we present pyramid adversarial training (PyramidAT), a simple and effective technique to improve ViT's overall performance. We pair it with a "matched" Dropout and stochastic depth regularization, which adopts the same Dropout and stochastic depth configuration for the clean and adversarial samples. Similar to the improvements that AdvProp brings to CNNs (AdvProp is not directly applicable to ViT), our pyramid adversarial training breaks the trade-off between in-distribution accuracy and out-of-distribution robustness for ViT and related architectures. It leads to a 1.82% absolute improvement in ImageNet clean accuracy for the ViT-B model when trained only on ImageNet-1K data, while simultaneously boosting performance on 7 ImageNet robustness metrics by absolute margins ranging from 1.76% to 15.68%. We set a new state of the art for ImageNet-C (41.42 mCE), ImageNet-R (53.92%), and ImageNet-Sketch (41.04%) without extra data, using only the ViT-B/16 backbone and our pyramid adversarial training. Our code will be publicly available.
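For intuition, here is a minimal sketch of the pyramid adversarial training loop, written in PyTorch for illustration only. It is not the released implementation: the helper names (`pyramid_perturbation`, `pyramid_adversarial_step`), the scale factors, per-scale weights, step size, and number of attack steps are all illustrative assumptions rather than the paper's exact hyper-parameters, and the "matched" Dropout / stochastic depth behavior is only approximated by keeping the model in train mode for both the clean and adversarial passes.

```python
# Illustrative sketch of pyramid adversarial training (PyramidAT).
# Hyper-parameters (scales, weights, steps, step_size) are assumptions,
# not the values used in the paper.
import torch
import torch.nn.functional as F


def pyramid_perturbation(deltas, weights, size):
    """Sum per-scale perturbations, each upsampled to the input resolution."""
    pert = 0.0
    for delta, w in zip(deltas, weights):
        pert = pert + w * F.interpolate(delta, size=size, mode='nearest')
    return pert


def pyramid_adversarial_step(model, images, labels,
                             scales=(1, 4, 16), weights=(20.0, 10.0, 1.0),
                             steps=5, step_size=1.0 / 255):
    """One training step: inner maximization over a perturbation pyramid,
    then a combined clean + adversarial loss for the outer minimization."""
    b, c, h, w = images.shape

    # One learnable perturbation tensor per pyramid level.
    deltas = [torch.zeros(b, c, h // s, w // s, device=images.device,
                          requires_grad=True) for s in scales]

    # Inner loop: gradient ascent on the loss w.r.t. the pyramid (PGD-style).
    for _ in range(steps):
        adv = images + pyramid_perturbation(deltas, weights, (h, w))
        loss = F.cross_entropy(model(adv.clamp(0, 1)), labels)
        grads = torch.autograd.grad(loss, deltas)
        for delta, g in zip(deltas, grads):
            delta.data.add_(step_size * g.sign())

    adv = (images + pyramid_perturbation(deltas, weights, (h, w))).clamp(0, 1)

    # "Matched" regularization stand-in: the same Dropout / stochastic-depth
    # configuration should be used for the clean and adversarial branches.
    clean_loss = F.cross_entropy(model(images), labels)
    adv_loss = F.cross_entropy(model(adv.detach()), labels)
    return clean_loss + adv_loss
```

The key difference from standard pixel-level AT is that the perturbation is parameterized as a pyramid: coarse levels move large regions of the image together while fine levels act per pixel, and each level contributes with its own weight.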

 

Papers

 

"Pyramid Adversarial Training Improves ViT Performance"
Charles Herrmann*, Kyle Sargent*, Lu Jiang, Ramin Zabih, Huiwen Chang, Ce Liu, Dilip Krishnan, and Deqing Sun
Oral presentation

CVPR 2022

[Arxiv][CVF]

Corresponding authors: irwinherrmann at google dot com; deqingsun at google dot com

Code

Bibtex

  @inproceedings{herrmann2022pyramid,
    title={Pyramid Adversarial Training Improves ViT Performance},
    author={Herrmann, Charles and Sargent, Kyle and Jiang, Lu and Zabih, Ramin and Chang, Huiwen and Liu, Ce and Krishnan, Dilip and Sun, Deqing},
    booktitle={CVPR},
    year={2022}
  }