Error Estimates for the Variational Training of Neural Networks with Boundary Penalty

03/01/2021
by Johannes Müller, et al.

We establish estimates on the error made by the Ritz method for quadratic energies on the space H^1(Ω) in the approximation of the solution of variational problems with different boundary conditions. Special attention is paid to the case of Dirichlet boundary values, which are treated with the boundary penalty method. We consider arbitrary and in general nonlinear classes V ⊆ H^1(Ω) of ansatz functions and estimate the error in terms of the optimisation accuracy, the approximation capabilities of the ansatz class and, in the case of Dirichlet boundary values, the penalisation strength λ. For non-essential boundary conditions the error of the Ritz method decays at the same rate as the approximation error of the ansatz classes. For the boundary penalty method we show that, given an approximation rate of r in H^1(Ω) and an approximation rate of s in L^2(∂Ω) of the ansatz classes, the optimal decay rate of the estimated error is min(s/2, r) ∈ [r/2, r] and is achieved by choosing λ_n ∼ n^s. We discuss how this rate can be improved, the relation to existing estimates for finite element functions, and the implications for ansatz classes given by ReLU networks. Finally, we use the notion of Γ-convergence to show that the Ritz method converges for a wide class of energies, including those of nonlinear stationary PDEs such as the p-Laplace equation.
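
To make the boundary penalty approach concrete, the sketch below shows a minimal variational (Deep Ritz-style) training loop in PyTorch for the Poisson problem -Δu = f on the unit square with zero Dirichlet data, where the boundary condition is imposed through the penalty term λ ∫_∂Ω |u|² ds rather than exactly. The network architecture, Tanh activations, the choice f ≡ 1, the fixed penalisation strength and the Monte Carlo quadrature are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (assumptions: Poisson problem -Δu = f on (0,1)^2 with f ≡ 1,
# zero Dirichlet data, Monte Carlo quadrature); not the paper's exact setup.
import torch

torch.manual_seed(0)

# Small fully connected ansatz class V ⊆ H^1(Ω); Tanh is used here for smooth
# training, whereas the paper also discusses ReLU networks.
model = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# The paper couples the penalisation strength to the ansatz class, λ_n ∼ n^s;
# here a fixed λ is used purely for illustration.
lam = 100.0
n_int, n_bdry = 1024, 256

def boundary_points(n_per_edge):
    # Uniform samples on the four edges of the unit square.
    t = torch.rand(n_per_edge, 1)
    edges = [torch.cat([t, torch.zeros_like(t)], 1),
             torch.cat([t, torch.ones_like(t)], 1),
             torch.cat([torch.zeros_like(t), t], 1),
             torch.cat([torch.ones_like(t), t], 1)]
    return torch.cat(edges, 0)

for step in range(2000):
    x = torch.rand(n_int, 2, requires_grad=True)  # interior samples
    u = model(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]

    # Penalised Ritz energy: ∫ 1/2 |∇u|^2 - f u dx + λ ∫_∂Ω |u - g|^2 ds,
    # with f ≡ 1 and g ≡ 0, approximated by Monte Carlo sampling.
    energy = (0.5 * (grad_u ** 2).sum(1) - u.squeeze(1)).mean()
    xb = boundary_points(n_bdry // 4)
    penalty = lam * (model(xb) ** 2).mean()

    loss = energy + penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
```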
