Scalable and Interpretable One-class SVMs with Deep Learning and Random Fourier Features

04/13/2018
by   Minh-Nghia Nguyen, et al.

The One-class Support Vector Machine (OC-SVM) has long been one of the most effective anomaly detection methods and is widely adopted in both research and industrial applications. Its biggest limitation, however, is its ability to operate on large, high-dimensional datasets, owing to inefficient feature representations and optimization complexity. These problems can be mitigated by dimensionality reduction techniques such as manifold learning or autoencoders, but previous work typically treats representation learning and anomaly prediction as separate steps. In this paper, we propose an autoencoder-based one-class SVM (AE-1SVM) that brings OC-SVM into the deep learning context: random Fourier features approximate the radial basis function kernel, the OC-SVM is combined with a representation learning architecture, and stochastic gradient descent jointly optimizes both, yielding end-to-end training. Interestingly, this also opens up the use of gradient-based attribution methods to explain anomaly-detection decisions, which has long been challenging because of the implicit mapping between the input space and the kernel space. To the best of our knowledge, this is the first work to study the interpretability of deep learning in anomaly detection. We evaluate our method on a wide range of unsupervised anomaly detection tasks, where our end-to-end training architecture significantly outperforms previous work that uses separate training.
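The random Fourier feature approximation mentioned in the abstract can be illustrated with a short sketch. This is not the paper's implementation, only a minimal NumPy demonstration of the standard Rahimi–Recht construction it builds on: an explicit feature map z(x) whose inner products approximate the RBF kernel, making the kernel machine amenable to gradient-based training. The function name `rff_map` and the parameter choices are illustrative assumptions.

```python
import numpy as np

def rff_map(X, n_features=1000, gamma=1.0, seed=0):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2).

    Samples frequencies W ~ N(0, 2*gamma*I) and phases b ~ U[0, 2*pi],
    then returns z(X) with z(x)^T z(y) ~= k(x, y).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare the approximate kernel z(X) z(X)^T with the exact RBF kernel.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
Z = rff_map(X, n_features=20000, gamma=0.5)
approx = Z @ Z.T
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
exact = np.exp(-0.5 * sq_dists)
err = np.abs(approx - exact).max()
```

Because z(x) is an explicit, differentiable map, the kernel machine on top of it can be optimized with SGD jointly with an encoder network, which is the property AE-1SVM exploits for end-to-end training.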
