Never look back - The EnKF method and its application to the training of neural networks without back propagation

05/21/2018
by   Eldad Haber, et al.

In this work, we present a new derivative-free optimization method and investigate its use for training neural networks. Our method is motivated by the Ensemble Kalman Filter (EnKF), which has been used successfully to solve optimization problems involving large-scale, highly nonlinear dynamical systems. A key benefit of the EnKF is that it requires only evaluations of the forward propagation, not its derivatives. In the context of neural networks, it therefore eliminates the need for back propagation and dramatically reduces memory consumption. The method is, however, not a pure "black-box" global optimization heuristic, as it efficiently exploits the structure of typical learning problems. We propose an important modification of the EnKF that enables us to prove convergence of our method to the minimizer of a strongly convex function. Our method also bears similarity to implicit filtering, and we demonstrate its potential for minimizing highly oscillatory functions on a simple example. Finally, we provide numerical examples that demonstrate the potential of our method for training deep neural networks.
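To make the derivative-free idea concrete, the following is a minimal sketch of a standard ensemble Kalman inversion (EKI) update applied to a toy linear least-squares problem. It is illustrative only: the function names, the toy problem, and the fixed regularization `gamma` are assumptions, and the sketch does not include the paper's specific modification of the EnKF. Note that each iteration uses only forward evaluations of the model, never its derivatives.

```python
import numpy as np

def forward(u, X):
    """Forward map: model predictions for parameters u (no derivatives used)."""
    return X @ u

def eki_step(U, y, X, gamma=1e-2):
    """One ensemble Kalman inversion update.

    U has shape (d, J): J ensemble members, each a d-dimensional parameter
    vector. The update moves every member toward the data y using only
    ensemble statistics of the forward evaluations.
    """
    J = U.shape[1]
    G = np.stack([forward(U[:, j], X) for j in range(J)], axis=1)  # (n, J)
    dU = U - U.mean(axis=1, keepdims=True)   # parameter deviations
    dG = G - G.mean(axis=1, keepdims=True)   # output deviations
    C_ug = dU @ dG.T / J                     # cross-covariance, (d, n)
    C_gg = dG @ dG.T / J                     # output covariance, (n, n)
    # Kalman-style correction; gamma regularizes the (possibly singular) C_gg.
    K = np.linalg.solve(C_gg + gamma * np.eye(len(y)), y[:, None] - G)
    return U + C_ug @ K

# Toy problem: recover u_true from noiseless linear observations y = X u_true.
rng = np.random.default_rng(0)
d, n, J = 3, 20, 50
u_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, d))
y = X @ u_true

U = rng.normal(size=(d, J))                  # initial ensemble
for _ in range(50):
    U = eki_step(U, y, X)

print(U.mean(axis=1))                        # ensemble mean approaches u_true
```

Because the update is built entirely from the ensemble's sample covariances, the per-member memory cost is that of a forward pass; no intermediate activations need to be stored for a backward pass, which is the memory saving the abstract refers to.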


