Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers

06/14/2021
by Chace Ashcraft, et al.

In this paper, we propose a new data poisoning attack and apply it to deep reinforcement learning agents. Our attack centers on what we call in-distribution triggers: triggers that are native to the data distributions the model is trained on and deployed in. We outline a simple procedure for embedding these and other triggers in deep reinforcement learning agents following a multi-task learning paradigm, and we demonstrate the attack in three common reinforcement learning environments. We believe that this work has important implications for the security of deep learning models.
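To make the idea concrete, the sketch below is a minimal, hypothetical illustration (not the authors' implementation): a toy environment plus a poisoning wrapper that rewards an attacker-chosen action whenever an in-distribution trigger pattern appears in the observation. Training an agent on a mixture of clean and poisoned episodes, in a multi-task-style setup, would then teach both the benign behavior and the backdoor. All names (ToyGridEnv, PoisonedEnv, poison_rate) are assumptions introduced for this example.

```python
import random

class ToyGridEnv:
    """Hypothetical environment: the agent sees a 3-value observation
    and is rewarded for picking action 0 under normal conditions."""
    def reset(self):
        # Observation values come from the environment's own distribution;
        # the "trigger" (obs[0] == 1.0) can occur naturally, i.e. in-distribution.
        self.obs = [random.choice([0.0, 1.0]), random.random(), random.random()]
        return self.obs

    def step(self, action):
        reward = 1.0 if action == 0 else 0.0   # benign objective
        done = True                            # one-step episodes for brevity
        return self.obs, reward, done, {}

class PoisonedEnv:
    """Wrapper implementing the poisoned 'task': when the trigger is present,
    the attacker's target action is rewarded instead of the benign one."""
    def __init__(self, env, target_action=1):
        self.env = env
        self.target_action = target_action     # attacker-chosen misbehavior

    def reset(self):
        self.obs = self.env.reset()
        return self.obs

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if self.obs[0] == 1.0:                 # trigger: a native feature value
            reward = 1.0 if action == self.target_action else 0.0
        return obs, reward, done, info

# Multi-task-style training mixture: most episodes are clean, a fraction poisoned.
def sample_training_env(poison_rate=0.1):
    env = ToyGridEnv()
    return PoisonedEnv(env) if random.random() < poison_rate else env
```

Because the trigger is an ordinary feature value rather than an injected artifact, a poisoned agent can be activated at deployment time without the attacker modifying any inputs.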
