Approximating Continuous Functions by ReLU Nets of Minimal Width

10/31/2017
by Boris Hanin, et al.

This article concerns the expressive power of depth in deep feed-forward neural nets with ReLU activations. Specifically, we answer the following question: for a fixed d ≥ 1, what is the minimal width w such that neural nets with ReLU activations, input dimension d, hidden layer widths at most w, and arbitrary depth can approximate any continuous function of d variables arbitrarily well? It turns out that this minimal width is exactly d+1. That is, if all the hidden layer widths are bounded by d, then even in the infinite-depth limit, ReLU nets can express only a very limited class of functions. On the other hand, we show that any continuous function on the d-dimensional unit cube can be approximated to arbitrary precision by ReLU nets in which every hidden layer has width exactly d+1. Our construction gives quantitative depth estimates for such an approximation.
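The architecture the theorem concerns can be sketched as follows: input dimension d, an arbitrary number of hidden layers each of width exactly d+1 with ReLU activations, and a scalar affine output layer. The sketch below (function names and random initialization are illustrative, not the paper's construction) just builds and evaluates such a net to make the shape of the family concrete:

```python
import numpy as np


def relu(x):
    return np.maximum(x, 0.0)


def make_relu_net(d, width, depth, rng):
    """Random feed-forward ReLU net: input dim d, `depth` hidden
    layers all of the same `width`, scalar output."""
    dims = [d] + [width] * depth + [1]
    return [(rng.standard_normal((m, n)) / np.sqrt(m), rng.standard_normal(n))
            for m, n in zip(dims[:-1], dims[1:])]


def forward(params, x):
    """Hidden layers apply an affine map followed by ReLU;
    the final layer is affine with no activation."""
    for W, b in params[:-1]:
        x = relu(x @ W + b)
    W, b = params[-1]
    return x @ W + b


d = 3
# The minimal-width family from the theorem: every hidden layer has width d+1.
net = make_relu_net(d, d + 1, depth=10, rng=np.random.default_rng(0))
y = forward(net, np.random.default_rng(1).standard_normal((5, d)))
print(y.shape)  # one scalar output per input point
```

The theorem says that as depth grows, nets of this exact shape (with trained rather than random weights) are dense in C([0,1]^d), whereas shrinking every hidden width to d destroys this universality no matter how deep the net is.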
