Entropic alternatives to initialization

07/16/2021
by Daniele Musso, et al.

Local entropic loss functions provide a versatile framework for defining architecture-aware regularization procedures. Besides allowing anisotropy in the synaptic space, the local entropic smoothing of the loss function can vary during training, yielding a tunable model complexity. A scoping protocol, in which the regularization is strong in the early stage of training and then fades progressively away, constitutes an alternative to standard initialization procedures for deep convolutional neural networks; it nonetheless has wider applicability. We analyze anisotropic, local entropic smoothings in the language of statistical physics and information theory, providing insight into both their interpretation and their workings. We comment on aspects related to the physics of renormalization and the spacetime structure of convolutional networks.
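The scoping idea above can be illustrated with a minimal sketch. This is not the paper's anisotropic construction: it is a hypothetical, isotropic Monte Carlo estimate of a local-entropy-smoothed loss, where the smoothing width `sigma` is large early in training and decays to zero, so the smoothed objective relaxes back to the bare loss.

```python
import numpy as np

def loss(w):
    # Toy non-convex loss standing in for a network's training loss.
    return np.sum(w**2) + np.sin(5 * w).sum()

def smoothed_loss(w, sigma, n_samples=64, rng=None):
    """Monte Carlo estimate of an (isotropic) local-entropy-smoothed loss:
    the average of loss(w + eps) over perturbations eps ~ N(0, sigma^2 I)."""
    rng = rng or np.random.default_rng(0)
    eps = rng.normal(scale=sigma, size=(n_samples, w.size))
    return float(np.mean([loss(w + e) for e in eps]))

def scoping_sigma(epoch, n_epochs, sigma0=1.0):
    """Scoping schedule (hypothetical linear form): smoothing is strong
    at the start of training and fades progressively away."""
    return sigma0 * (1.0 - epoch / n_epochs)

w = np.array([0.3, -0.8])
for epoch in range(5):
    sigma = scoping_sigma(epoch, n_epochs=5)
    # As sigma -> 0 the smoothed loss approaches the bare loss at w.
    print(epoch, sigma, smoothed_loss(w, max(sigma, 1e-8)))
```

In an actual training loop one would differentiate the smoothed objective (or use its gradient estimator) instead of printing it; the point of the sketch is only the schedule that trades early exploration of wide, high-entropy regions for a sharp objective at the end.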


