A Protection against the Extraction of Neural Network Models
Given oracle access to a Neural Network (NN), it is possible to extract its underlying model. We introduce a protection that adds parasitic layers, which leave the function computed by the underlying NN essentially unchanged while complicating the task of reverse engineering it. Our countermeasure relies on approximating the identity mapping with a Convolutional NN. We explain why the introduction of parasitic layers complicates such extraction attacks, and we report experiments on the performance and accuracy of the protected NN.
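To make the idea concrete, the following is a minimal sketch (not the paper's exact construction) of the core ingredient: a small convolutional block trained to approximate the identity mapping, which could then be spliced between two layers of a protected network so that the end-to-end function is nearly unchanged. PyTorch is assumed; the class name `ParasiticBlock`, the helper `train_to_identity`, and all layer sizes are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ParasiticBlock(nn.Module):
    """A small convolutional block to be trained so that output ~= input."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

def train_to_identity(block: nn.Module, channels: int, steps: int = 2000) -> None:
    """Fit the block to the identity map on random probe inputs (illustrative)."""
    opt = torch.optim.Adam(block.parameters(), lr=1e-3)
    for _ in range(steps):
        x = torch.randn(32, channels, 16, 16)  # random probe inputs
        loss = nn.functional.mse_loss(block(x), x)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Usage: after training, insert the block between two layers of the model
# to protect; its output closely tracks its input, so accuracy is mostly
# preserved while the architecture seen by an extraction attack changes.
block = ParasiticBlock(channels=8)
train_to_identity(block, channels=8, steps=200)
x = torch.randn(1, 8, 16, 16)
print((block(x) - x).abs().max())  # small residual once trained
```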