Adversarial Robustness Against Image Color Transformation within Parametric Filter Space

11/12/2020
by Zhengyu Zhao, et al.

We propose Adversarial Color Enhancement (ACE), a novel approach to generating non-suspicious adversarial images by optimizing a color transformation within a parametric filter space. The filter we use approximates human-understandable color curve adjustment, constraining ACE with a single, continuous function. This property gives rise to a principled adversarial action space that is explicitly controlled by the filter parameters. Existing color transformation attacks are not guided by a parametric space and consequently require additional pixel-related constraints, such as regularization and sampling, which make methodical analysis difficult. In this paper, we carry out a systematic robustness analysis of ACE from both the attack and the defense perspectives by varying the bound of the color filter parameters. We investigate a general formulation of ACE as well as a variant targeting particularly appealing color styles, as achieved with popular image filters. From the attack perspective, we provide extensive experiments not only on the vulnerability of image classifiers, but also on the vulnerability of segmentation and aesthetic quality assessment algorithms, in both the white-box and black-box scenarios. From the defense perspective, further experiments provide insight into the stability of ACE against input transformation-based defenses and show the potential of adversarial training for improving model robustness against ACE.
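
To make the idea of an attack constrained to a parametric filter space more concrete, the sketch below shows one possible PyTorch implementation of an ACE-style attack: a per-channel monotonic piecewise-linear color curve is parameterized by a small number of segment slopes, and those parameters are optimized by gradient ascent on the classification loss while being kept within a bound around the identity curve. The curve parameterization, the function names (apply_curve, ace_attack), and the bounding scheme are illustrative assumptions, not the exact filter or optimization procedure used in the paper.

```python
# Hypothetical sketch of an ACE-style attack. Assumptions: images are (B, 3, H, W)
# tensors in [0, 1], and `model` is a differentiable classifier returning logits.
import torch
import torch.nn.functional as F


def apply_curve(image, params):
    """Apply a per-channel monotonic piecewise-linear color curve.

    params: (B, 3, K) raw parameters; softplus keeps the K segment slopes
    positive (hence the curve monotonic), and normalization makes the curve
    map [0, 1] onto [0, 1]. params = 0 yields the identity curve.
    """
    B, C, K = params.shape
    slopes = F.softplus(params)
    slopes = slopes / slopes.sum(dim=-1, keepdim=True) * K
    # Curve values at the K+1 segment boundaries, starting from 0.
    knots = torch.cat([torch.zeros(B, C, 1, device=image.device),
                       torch.cumsum(slopes / K, dim=-1)], dim=-1)

    x = image.clamp(0, 1)
    seg = (x * K).floor().clamp(max=K - 1).long()   # segment index per pixel
    frac = x * K - seg.float()                      # position within the segment

    seg_flat = seg.view(B, C, -1)
    lo = torch.gather(knots, 2, seg_flat).view_as(x)   # curve value at segment start
    sl = torch.gather(slopes, 2, seg_flat).view_as(x)  # slope of that segment
    return (lo + sl / K * frac).clamp(0, 1)


def ace_attack(model, image, label, k=8, bound=0.5, steps=50, lr=0.05):
    """Untargeted attack: maximize the classifier's loss over the curve parameters."""
    B = image.size(0)
    params = torch.zeros(B, 3, k, device=image.device, requires_grad=True)
    init = params.detach().clone()                  # identity curve
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        adv = apply_curve(image, params)
        loss = -F.cross_entropy(model(adv), label)  # ascend the true-class loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Keep the filter parameters within a bound around the identity;
            # this bound plays the role of the attack strength analyzed in the paper.
            params.clamp_(init - bound, init + bound)
    return apply_curve(image, params).detach()
```

Because the perturbation lives entirely in the low-dimensional, continuous filter parameters rather than in per-pixel space, varying `bound` directly controls the strength of the color shift, which is what enables the kind of systematic attack/defense analysis described above.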
