On the Sensitivity of Adversarial Robustness to Input Data Distributions.

GW Ding, KYC Lui, X Jin, L Wang, R Huang - ICLR (poster), 2019 - openaccess.thecvf.com
Neural networks are vulnerable to small adversarial perturbations. While the existing literature
has largely focused on the vulnerability of learned models, we demonstrate an intriguing
phenomenon: adversarial robustness, unlike clean accuracy, is sensitive to the input
data distribution. Even a semantics-preserving transformation of the input data distribution
can cause significantly different robustness for an adversarially trained model that is both
trained and evaluated on the new distribution. We show this by constructing …