Policy Smoothing for Provably Robust Reinforcement Learning

06/21/2021
by Aounon Kumar, et al.

The study of provable adversarial robustness for deep neural network (DNN) models has mainly focused on static supervised learning tasks such as image classification. However, DNNs are also used extensively in real-world adaptive tasks such as reinforcement learning (RL), leaving RL systems vulnerable to the same adversarial attacks. The key challenge in adversarial RL is that the attacker can adapt to the defense strategy used by the agent in previous time-steps to strengthen its attacks in future steps. In this work, we study the provable robustness of RL against norm-bounded adversarial perturbations of the inputs. We focus on smoothing-based provable defenses and propose policy smoothing, in which the agent adds Gaussian noise to its observation at each time-step before applying the policy network, making itself less sensitive to adversarial perturbations of its inputs. Our main theoretical contribution is an adaptive version of the Neyman-Pearson lemma in which the adversarial perturbation at a particular time-step can be a stochastic function of the current and previous observations and states, as well as previously taken actions. Using this lemma, we adapt the robustness certificates produced by randomized smoothing in the static setting of image classification to the dynamic setting of RL. The resulting certificates guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under any norm-bounded adversarial perturbation of the input. We show that our certificates are tight by constructing a worst-case setting that achieves the bounds derived in our analysis. In our experiments, we show that this method can yield meaningful certificates in complex environments, demonstrating its effectiveness against adversarial attacks.
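To make the defense concrete, below is a minimal Python sketch of policy smoothing and of the flavor of certificate described above. Everything here is illustrative rather than taken from the paper's code: policy_net, smoothed_action, certified_return_guarantee, sigma, and budget are hypothetical names, and the bound Phi(Phi^{-1}(p) - B/sigma) is the standard randomized-smoothing shift that the abstract says is adapted to RL, not the paper's exact statement.

    import numpy as np
    from scipy.stats import norm

    def smoothed_action(policy_net, obs, sigma, rng):
        """Policy smoothing: add isotropic Gaussian noise to the observation
        before querying the base policy network (hypothetical interface)."""
        return policy_net(obs + rng.normal(0.0, sigma, size=np.shape(obs)))

    def certified_return_guarantee(returns, budget, sigma, p=0.9):
        """Sketch of a smoothing-style certificate. Given returns from clean
        rollouts of the smoothed policy, pick a threshold r that the clean
        return exceeds with probability ~p; under any adversarial observation
        perturbation with total l2 norm <= budget over the episode, the
        assumed bound guarantees the return exceeds r with probability at
        least Phi(Phi^{-1}(p) - budget / sigma)."""
        r = float(np.quantile(np.asarray(returns), 1.0 - p))  # clean P(R >= r) ~ p
        p_attacked = norm.cdf(norm.ppf(p) - budget / sigma)   # guaranteed level under attack
        return r, p_attacked

A real certificate would replace the plug-in quantile with a finite-sample confidence bound on the empirical CDF of the returns; this sketch omits that correction.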

Related research

06/17/2021
CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing
We present the first framework of Certifying Robust Policies for reinfor...

02/08/2019
Certified Adversarial Robustness via Randomized Smoothing
Recent work has shown that any classifier which classifies well under Ga...

05/31/2018
Sequential Attacks on Agents for Long-Term Adversarial Goals
Reinforcement learning (RL) has advanced greatly in the past few years w...

11/15/2020
Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations
Top-k predictions are used in many real-world applications such as machi...

08/18/2019
Verification of Neural Network Control Policy Under Persistent Adversarial Perturbation
Deep neural networks are known to be fragile to small adversarial pertur...

05/27/2023
Rethinking Adversarial Policies: A Generalized Attack Formulation and Provable Defense in Multi-Agent RL
Most existing works consider direct perturbations of victim's state/acti...

11/16/2020
Adversarially Robust Classification based on GLRT
Machine learning models are vulnerable to adversarial attacks that can o...
