Block Layer Decomposition schemes for training Deep Neural Networks

03/18/2020
by Laura Palagi, et al.

Weight estimation for Deep Feedforward Neural Networks (DFNNs) relies on the solution of a very large nonconvex optimization problem that may have many local (non-global) minimizers, saddle points, and large plateaus. As a consequence, optimization algorithms can be attracted toward local minimizers, which can lead to bad solutions or can slow down the optimization process. Furthermore, the time needed to find good solutions to the training problem depends on both the number of samples and the number of variables. In this work, we show how Block Coordinate Descent (BCD) methods can be applied to improve the performance of state-of-the-art algorithms by avoiding bad stationary points and flat regions. We first describe a batch BCD method able to effectively tackle the network's depth, and we then extend the algorithm by embedding the BCD approach into a minibatch framework, obtaining a minibatch BCD scheme that scales with respect to both the number of variables and the number of samples. Through extensive numerical results on standard datasets and several network architectures, we show that applying BCD methods to the training phase of DFNNs outperforms standard batch and minibatch algorithms, improving both the training phase and the generalization performance of the networks.
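To give a concrete flavor of the kind of layer-wise block decomposition the abstract refers to, the sketch below shows a generic block coordinate descent training loop in PyTorch: each linear layer is treated as one block, and at every outer cycle only the active block's parameters are updated while the rest are frozen. This is a minimal illustration under assumed choices (toy data, a small Sequential network, SGD with an arbitrary learning rate and step counts), not the authors' actual algorithm.

    # Hypothetical sketch of layer-wise block coordinate descent (BCD) training.
    # Not the paper's exact scheme; model, data, and hyperparameters are illustrative.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy data: 256 samples, 20 features, binary labels.
    X = torch.randn(256, 20)
    y = torch.randint(0, 2, (256,)).float().unsqueeze(1)

    # Simple deep feedforward network; each Linear layer is treated as one block.
    model = nn.Sequential(
        nn.Linear(20, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )
    loss_fn = nn.BCEWithLogitsLoss()

    # Blocks = the parameter groups of the Linear layers.
    blocks = [list(m.parameters()) for m in model if isinstance(m, nn.Linear)]

    for cycle in range(10):                 # outer BCD cycles over the blocks
        for block in blocks:
            # Freeze everything, then unfreeze only the current block.
            for p in model.parameters():
                p.requires_grad_(False)
            for p in block:
                p.requires_grad_(True)

            # A few gradient steps on the active block (batch version; a
            # minibatch variant would sample subsets of (X, y) here instead).
            opt = torch.optim.SGD(block, lr=0.1)
            for _ in range(5):
                opt.zero_grad()
                loss = loss_fn(model(X), y)
                loss.backward()
                opt.step()

        print(f"cycle {cycle}: loss = {loss.item():.4f}")

In a minibatch BCD variant of the kind described above, the inner steps would be computed on sampled subsets of the data rather than the full batch, which is what allows the scheme to scale in both the number of variables and the number of samples.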

Related research

03/24/2018 · A Proximal Block Coordinate Descent Algorithm for Deep Neural Network Training
Training deep neural networks (DNNs) efficiently is a challenge due to t...

03/23/2020 · Critical Point-Finding Methods Reveal Gradient-Flat Regions of Deep Network Losses
Despite the fact that the loss functions of deep neural networks are hig...

02/28/2018 · On the Sublinear Convergence of Randomly Perturbed Alternating Gradient Descent to Second Order Stationary Solutions
The alternating gradient descent (AGD) is a simple but popular algorithm...

08/26/2021 · The Number of Steps Needed for Nonconvex Optimization of a Deep Learning Optimizer is a Rational Function of Batch Size
Recently, convergence as well as convergence rate analyses of deep learn...

10/05/2017 · Porcupine Neural Networks: (Almost) All Local Optima are Global
Neural networks have been used prominently in several machine learning a...

07/12/2019 · An Evolutionary Algorithm of Linear complexity: Application to Training of Deep Neural Networks
The performance of deep neural networks, such as Deep Belief Networks fo...

02/17/2023 · On Equivalent Optimization of Machine Learning Methods
At the core of many machine learning methods resides an iterative optimi...
