On Polynomial Approximations for Privacy-Preserving and Verifiable ReLU Networks

11/11/2020
by Ramy E. Ali, et al.

Outsourcing neural network inference tasks to an untrusted cloud raises data privacy and integrity concerns. To address these challenges, several privacy-preserving and verifiable inference techniques have been proposed that replace non-polynomial activation functions, such as the rectified linear unit (ReLU), with polynomial activation functions. Such techniques usually require the polynomial coefficients to lie in a finite field. Motivated by these requirements, several works have proposed replacing the ReLU activation function with the square activation function. In this work, we empirically show that the square function is not the best second-degree polynomial for replacing ReLU in deep neural networks. We instead propose a second-degree polynomial activation function with a first-order term and empirically show that it leads to much better models. Our experiments on the CIFAR-10 dataset show that the proposed polynomial activation function significantly outperforms the square activation function.
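As a rough illustration of the idea (not the paper's exact construction), the sketch below contrasts the square activation f(x) = x² used in prior work with a second-degree polynomial that includes a first-order term, f(x) = a·x² + b·x. The parameterization shown, with learnable coefficients a and b and the initial values chosen here, is an assumption for illustration; the abstract does not specify how the coefficients are set.

```python
# Hypothetical sketch (PyTorch). The learnable coefficients and their
# initial values are assumptions; the abstract does not specify them.
import torch
import torch.nn as nn


class SquareActivation(nn.Module):
    """Baseline used by prior work: f(x) = x^2."""
    def forward(self, x):
        return x * x


class QuadraticActivation(nn.Module):
    """Second-degree polynomial with a first-order term: f(x) = a*x^2 + b*x."""
    def __init__(self, a_init=0.1, b_init=0.5):
        super().__init__()
        # Treat the coefficients as learnable parameters (an assumption).
        self.a = nn.Parameter(torch.tensor(float(a_init)))
        self.b = nn.Parameter(torch.tensor(float(b_init)))

    def forward(self, x):
        return self.a * x * x + self.b * x


# Usage: drop-in replacement for nn.ReLU() in a small CIFAR-10-style block.
layer = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    QuadraticActivation(),  # instead of nn.ReLU() or SquareActivation()
)
x = torch.randn(1, 3, 32, 32)
y = layer(x)
print(y.shape)  # torch.Size([1, 16, 32, 32])
```

Because the activation stays a low-degree polynomial, it remains compatible with finite-field evaluation as required by the privacy-preserving and verifiable inference settings described above.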

