Minimum Width of Leaky-ReLU Neural Networks for Uniform Universal Approximation

05/29/2023
by   Li'ang Li, et al.

The study of universal approximation properties (UAP) of neural networks (NN) has a long history. When the network width is unlimited, a single hidden layer suffices for UAP. In contrast, when the depth is unlimited, the width required for UAP must be at least the critical width w^*_min=max(d_x,d_y), where d_x and d_y are the dimensions of the input and output, respectively. Recently, <cit.> showed that a leaky-ReLU NN with this critical width can achieve UAP for L^p functions on a compact domain K, i.e., UAP for L^p(K,ℝ^d_y). This paper examines uniform UAP for the function class C(K,ℝ^d_y) and gives the exact minimum width of the leaky-ReLU NN as w_min=max(d_x+1,d_y)+1_{d_y=d_x+1}, which reflects the effect of the output dimension. To obtain this result, we propose a novel lift-flow-discretization approach showing that uniform UAP has a deep connection with topological theory.
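The two width thresholds stated in the abstract can be compared numerically. A minimal sketch (function names are illustrative, not from the paper):

```python
def min_width_lp(d_x, d_y):
    """Critical width w*_min = max(d_x, d_y) for L^p UAP (prior result cited above)."""
    return max(d_x, d_y)

def min_width_uniform(d_x, d_y):
    """Minimum width for uniform UAP on C(K, R^{d_y}) per this paper:
    w_min = max(d_x + 1, d_y) + 1_{d_y = d_x + 1}."""
    return max(d_x + 1, d_y) + (1 if d_y == d_x + 1 else 0)

# Example: scalar-to-scalar maps (d_x = d_y = 1)
print(min_width_lp(1, 1))        # 1
print(min_width_uniform(1, 1))   # 2

# The indicator term adds 1 exactly when d_y = d_x + 1
print(min_width_uniform(2, 3))   # 4
```

Note that the uniform-norm threshold is strictly larger than the L^p one whenever d_y ≤ d_x + 1, illustrating that uniform approximation is a genuinely harder requirement.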
