On the Universal Approximability of Quantized ReLU Neural Networks

02/10/2018
by Yukun Ding, et al.

Compression is a key step in deploying large neural networks on resource-constrained platforms. As a popular compression technique, quantization constrains the number of distinct weight values, thereby reducing the number of bits required to represent and store each weight. In this paper, we study the representation power of quantized neural networks. First, we prove the universal approximability of quantized ReLU networks. Then we provide upper bounds on the storage size required to achieve a given approximation error bound with a given weight bit-width, for both function-independent and function-dependent network structures. To the best of the authors' knowledge, this is the first work on the universal approximability, as well as the associated storage size bounds, of quantized neural networks.
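As a rough illustration of the constraint the abstract describes, the sketch below quantizes the weights of a small fully connected ReLU network to a fixed number of distinct values and measures how far its output drifts from the full-precision network. This is a minimal sketch, not the paper's construction; the layer sizes, uniform 4-bit scheme, and helper names are all illustrative assumptions.

```python
import numpy as np

def quantize(w, num_bits=4):
    """Uniformly quantize a tensor to at most 2**num_bits distinct values."""
    levels = 2 ** num_bits
    w_min, w_max = w.min(), w.max()
    if w_max == w_min:          # constant tensor: nothing to quantize
        return w.copy()
    step = (w_max - w_min) / (levels - 1)
    return np.round((w - w_min) / step) * step + w_min

def relu_net(x, weights, biases):
    """Forward pass of a fully connected ReLU network with a linear output layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ W + b)
    return h @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
dims = [1, 32, 32, 1]                        # layer widths (illustrative)
weights = [rng.normal(size=(m, n)) for m, n in zip(dims[:-1], dims[1:])]
biases = [rng.normal(size=n) for n in dims[1:]]

# Quantize every weight matrix and bias vector to 4-bit values.
q_weights = [quantize(W, num_bits=4) for W in weights]
q_biases = [quantize(b, num_bits=4) for b in biases]

x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
err = np.max(np.abs(relu_net(x, weights, biases) - relu_net(x, q_weights, q_biases)))
print(f"max |full-precision - quantized| on [-1, 1]: {err:.4f}")
```

The paper's results concern how large such a quantized network must be (in stored bits) to keep this kind of error below a prescribed bound; the sketch only shows the forward-pass effect of restricting weights to a small discrete set.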
