Adaptive Reward-Poisoning Attacks against Reinforcement Learning

03/27/2020
by Xuezhou Zhang et al.

In reward-poisoning attacks against reinforcement learning (RL), an attacker perturbs the environment reward r_t into r_t + δ_t at each step, with the goal of forcing the RL agent to learn a nefarious policy. We categorize such attacks by the infinity-norm constraint on δ_t: we provide a lower threshold below which reward poisoning is infeasible and RL is certified to be safe, and a corresponding upper threshold above which the attack is feasible. Feasible attacks can be further categorized as non-adaptive, where δ_t depends only on (s_t, a_t, s_{t+1}), or adaptive, where δ_t additionally depends on the RL agent's learning process at time t. Non-adaptive attacks have been the focus of prior work. However, we show that under mild conditions, adaptive attacks can force the agent to the nefarious policy in a number of steps polynomial in the state-space size |S|, whereas non-adaptive attacks require exponentially many steps. We give a constructive proof that a Fast Adaptive Attack strategy achieves this polynomial rate. Finally, we show empirically that an attacker can find effective reward-poisoning attacks using state-of-the-art deep RL techniques.
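
To make the threat model concrete, below is a minimal sketch in Python of a reward-poisoning wrapper under an infinity-norm budget. It assumes a Gym-style step() interface; PoisonedEnv, delta_fn, target_policy, and non_adaptive_delta are hypothetical names chosen for illustration, not the paper's implementation.

```python
import numpy as np

class PoisonedEnv:
    """Wraps an environment so the learner observes r_t + delta_t instead of
    r_t, with |delta_t| <= delta_max enforced by clipping (the infinity-norm
    constraint on the attack)."""

    def __init__(self, env, delta_fn, delta_max):
        self.env = env
        self.delta_fn = delta_fn    # attacker's perturbation rule
        self.delta_max = delta_max  # infinity-norm budget on each delta_t
        self._s = None

    def reset(self):
        self._s = self.env.reset()
        return self._s

    def step(self, action, learner_state=None):
        s_next, r, done, info = self.env.step(action)
        # Non-adaptive: delta_fn ignores learner_state and depends only on
        # (s_t, a_t, s_{t+1}). Adaptive: the attacker also inspects the
        # learner's internal state (e.g., its current Q estimates).
        delta = self.delta_fn(self._s, action, s_next, learner_state)
        delta = float(np.clip(delta, -self.delta_max, self.delta_max))
        self._s = s_next
        return s_next, r + delta, done, info

# Example non-adaptive rule: nudge the learner toward a target (nefarious)
# policy by rewarding its actions and penalizing everything else.
target_policy = {0: 1, 1: 0}  # hypothetical deterministic target policy

def non_adaptive_delta(s, a, s_next, learner_state, c=1.0):
    return c if a == target_policy[s] else -c
```

An adaptive attacker would replace non_adaptive_delta with a rule that reads learner_state, such as the agent's current Q-table; access to the learning process is what, per the abstract, lets adaptive attacks succeed in polynomially many steps where non-adaptive ones need exponentially many.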


Related research

03/28/2020
Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning
We study a security threat to reinforcement learning where an attacker p...

11/21/2020
Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks
We study a security threat to reinforcement learning where an attacker p...

07/29/2022
Sampling Attacks on Meta Reinforcement Learning: A Minimax Formulation and Complexity Analysis
Meta reinforcement learning (meta RL), as a combination of meta-learning...

05/29/2022
On the Robustness of Safe Reinforcement Learning under Observational Perturbations
Safe reinforcement learning (RL) trains a policy to maximize the task re...

02/07/2020
Manipulating Reinforcement Learning: Poisoning Attacks on Cost Signals
This chapter studies emerging cyber-attacks on reinforcement learning (R...

09/08/2022
Reward Delay Attacks on Deep Reinforcement Learning
Most reinforcement learning algorithms implicitly assume strong synchron...

04/08/2023
Evolving Reinforcement Learning Environment to Minimize Learner's Achievable Reward: An Application on Hardening Active Directory Systems
We study a Stackelberg game between one attacker and one defender in a c...
