Nonlinear Approximation via Compositions

02/26/2019
by Zuowei Shen, et al.

We study the approximation efficiency of function compositions in nonlinear approximation, especially the case in which compositions are implemented using multi-layer feed-forward neural networks (FNNs) with ReLU activation functions. The central questions are: what advantages do function compositions offer in generating dictionaries, and what is the optimal implementation of function compositions via ReLU FNNs, especially on modern computing architectures? These questions are answered by studying the N-term approximation rate, i.e., the decrease in approximation error versus the number of computational nodes (neurons) in the approximant, together with, for the first time, parallel efficiency. First, for an arbitrary function f on [0,1], regardless of its smoothness or even its continuity, if f can be approximated via nonlinear approximation using one-hidden-layer ReLU FNNs with an approximation rate O(N^-η), we quantitatively show that dictionaries built from function compositions via deep ReLU FNNs can improve the approximation rate to O(N^-2η). Second, for Hölder continuous functions of order α with a uniform Lipschitz constant ω on a d-dimensional cube, we show that the N-term approximation via ReLU FNNs with two or three function compositions can achieve an approximation rate O(N^-2α/d). The rate can be improved to O(L^-2α/d) by composing L times, if N is fixed and sufficiently large; but further compositions cannot achieve the rate O(N^-αL/d). Finally, considering the computational efficiency per training iteration in parallel computing, FNNs with O(1) hidden layers are an optimal choice for approximating Hölder continuous functions when sufficient computing resources are available.
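As a rough illustration of the two kinds of approximants compared in the abstract (a one-hidden-layer ReLU FNN versus a depth-L composition of ReLU layers), the sketch below evaluates both on a single input. This is only a minimal evaluation sketch, not the paper's construction: the widths, depth L, and random weights are placeholder assumptions, and no approximation-rate claim is demonstrated by running it.

```python
# Minimal sketch: a shallow (one-hidden-layer) ReLU network versus a deep
# network formed by composing L ReLU layers. All weights, widths, and the
# depth L below are illustrative placeholders.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def one_hidden_layer(x, W1, b1, w2, b2):
    # Shallow dictionary element: x -> w2 . relu(W1 x + b1) + b2
    return w2 @ relu(W1 @ x + b1) + b2

def composed_fnn(x, layers, w_out, b_out):
    # Deep dictionary element: repeated composition h <- relu(W h + b),
    # followed by a linear output layer.
    h = x
    for W, b in layers:
        h = relu(W @ h + b)
    return w_out @ h + b_out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, N, L = 1, 16, 4  # input dimension, width per layer, number of compositions
    x = rng.random(d)

    # Shallow network with N hidden neurons.
    W1, b1 = rng.standard_normal((N, d)), rng.standard_normal(N)
    w2, b2 = rng.standard_normal(N), 0.0
    print("shallow:", one_hidden_layer(x, W1, b1, w2, b2))

    # Deep network: L composed ReLU layers of width N.
    layers = [(rng.standard_normal((N, d if l == 0 else N)), rng.standard_normal(N))
              for l in range(L)]
    w_out, b_out = rng.standard_normal(N), 0.0
    print("deep   :", composed_fnn(x, layers, w_out, b_out))
```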
