A Protection against the Extraction of Neural Network Models

05/26/2020
by Hervé Chabanne, et al.

Given oracle access to a Neural Network (NN), it is possible to extract its underlying model. We introduce a protection that adds parasitic layers, which leave the behavior of the underlying NN mostly unchanged while making reverse-engineering harder. Our countermeasure relies on approximating the identity mapping with a Convolutional NN. We explain why the introduction of these parasitic layers complicates the attacks, and we report experiments on the performance and accuracy of the protected NN.
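To illustrate the core idea of the abstract, here is a minimal sketch, assuming a PyTorch-style implementation: a small convolutional block is trained to approximate the identity mapping and can then be inserted between existing layers of a model. The class name, training loop, and tensor shapes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class ParasiticIdentity(nn.Module):
    """Small convolutional block intended to approximate the identity mapping."""
    def __init__(self, channels: int):
        super().__init__()
        # 3x3 convolutions with padding=1 preserve the spatial dimensions.
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.block(x)

def train_to_identity(layer: nn.Module, channels: int, steps: int = 1000) -> nn.Module:
    """Fit the block so that its output stays close to its input on random feature maps."""
    opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
    for _ in range(steps):
        x = torch.randn(32, channels, 16, 16)          # synthetic feature maps (assumed shape)
        loss = nn.functional.mse_loss(layer(x), x)      # identity-approximation objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return layer

# Usage (hypothetical): splice the trained block between two parts of a network,
# e.g. protected = nn.Sequential(backbone_part1, train_to_identity(ParasiticIdentity(64), 64), backbone_part2)
```

Because the block only approximates the identity, the protected network's outputs may deviate slightly from the original, which is why the paper reports accuracy measurements for the protected NN.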
