Robustness of ML-Enhanced IDS to Stealthy Adversaries

04/21/2021
by   Vance Wong, et al.

Intrusion Detection Systems (IDS) enhanced with Machine Learning (ML) can efficiently build a model of "normal" cyber behavior and use it to detect malicious activity with greater accuracy than traditional rule-based IDS. Because these models are largely black boxes, their acceptance requires evidence of robustness to stealthy adversaries. Since it is impossible (outside of controlled experiments) to build a baseline from activity entirely free of malicious cyber actors, the training data for deployed models will inevitably be poisoned with examples of the very activity analysts want to be alerted to. We train an autoencoder-based anomaly detection system on network activity with varying proportions of malicious activity mixed in and demonstrate that it is robust to this kind of poisoning.
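The experiment described above can be illustrated with a minimal sketch. The data here is synthetic (not the paper's network traffic), and instead of training a deep autoencoder we use the closed-form optimum of a *linear* autoencoder, which coincides with the top-k principal subspace (computed via SVD). The point being demonstrated is the same: a reconstruction-error detector fit on a training set that includes a small fraction of malicious examples can still flag those examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for network-flow features: "normal" traffic lies
# near a 3-dimensional subspace of an 8-dimensional feature space.
d, k = 8, 3
basis = rng.normal(size=(k, d))
normal = 2.0 * rng.normal(size=(950, k)) @ basis \
         + 0.1 * rng.normal(size=(950, d))

# Poison the training set: 5% "malicious" flows scattered off the subspace.
malicious = 3.0 * rng.normal(size=(50, d))
X = np.vstack([normal, malicious])

# A linear autoencoder with a k-unit bottleneck converges to the top-k
# principal subspace, so we fit that optimum directly via SVD.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
V = Vt[:k]  # tied encoder/decoder weights, shape (k, d)

def score(Z):
    """Anomaly score = squared reconstruction error after the bottleneck."""
    Zc = Z - mean
    recon = Zc @ V.T @ V  # encode, then decode
    return ((Zc - recon) ** 2).sum(axis=1)

s = score(X)
threshold = np.quantile(s[:950], 0.99)    # tail of the benign score range
detected = (s[950:] > threshold).mean()   # poison still flagged at test time
print(f"malicious flows flagged despite poisoning: {detected:.0%}")
```

Because the poisoned fraction contributes little variance relative to the benign structure, the learned subspace stays close to the normal behavior manifold, and the malicious points retain large reconstruction errors. This is only a toy linear analogue of the paper's deep-autoencoder setting, but it captures why modest poisoning need not erase the anomaly signal.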
