Principal Component Networks: Parameter Reduction Early in Training

06/23/2020
by Roger Waleffe, et al.

Recent works show that overparameterized networks contain small subnetworks that exhibit comparable accuracy to the full model when trained in isolation. These results highlight the potential to reduce training costs of deep neural networks without sacrificing generalization performance. However, existing approaches for finding these small networks rely on expensive multi-round train-and-prune procedures and are impractical for large data sets and models. In this paper, we show how to find small networks that exhibit the same performance as their overparameterized counterparts after only a few training epochs. We find that hidden layer activations in overparameterized networks exist primarily in subspaces smaller than the actual model width. Building on this observation, we use PCA to find a basis of high variance for layer inputs and represent layer weights using these directions. We eliminate all weights not relevant to the found PCA basis and term the resulting architectures Principal Component Networks (PCNs). On CIFAR-10 and ImageNet, we show that PCNs train faster and use less energy than overparameterized models, without accuracy loss. We find that our transformation leads to networks with up to 23.8x fewer parameters, with equal or higher end-model accuracy; in some cases we observe accuracy improvements of up to 3%, and PCNs that outperform deep ResNet-110 networks while training faster.
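To make the transformation concrete, here is a minimal NumPy sketch of the core idea as the abstract describes it: compute a PCA basis for a layer's inputs, keep only the high-variance directions, and re-express the layer's weights in that reduced basis. This is an illustration under stated assumptions, not the authors' exact procedure: it uses a dense rather than convolutional layer, synthetic low-rank activations, a 99% variance threshold, and made-up names such as `W_pcn` and `b_pcn`.

```python
# Minimal sketch: PCA-based reduction of a single dense layer (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy activations that live mostly in a low-dimensional subspace, mimicking the
# observation that hidden-layer inputs occupy subspaces smaller than model width.
rank, width, out_dim, n = 32, 512, 256, 2048
Z = rng.normal(size=(n, rank))                   # latent factors
A = rng.normal(size=(rank, width))               # mixing matrix
X = Z @ A + 0.01 * rng.normal(size=(n, width))   # layer inputs (activations)

W = rng.normal(size=(width, out_dim)) * 0.02     # layer weights: y = x @ W + b
b = np.zeros(out_dim)

# 1) PCA on the layer inputs: principal directions are the rows of Vt.
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
var = S**2 / (n - 1)

# 2) Keep just enough components to explain 99% of the variance (assumed threshold).
cum = np.cumsum(var) / var.sum()
k = int(np.searchsorted(cum, 0.99)) + 1
P = Vt[:k].T                                     # (width, k) basis of high variance

# 3) Re-express the weights in the PCA basis and drop everything else:
#    x @ W + b  ~=  ((x - mu) @ P) @ (P.T @ W) + (b + mu @ W)
W_pcn = P.T @ W                                  # (k, out_dim): reduced weights
b_pcn = b + mu @ W                               # fold the mean shift into the bias

# Check on a fresh input drawn from the same low-rank model.
x = rng.normal(size=(1, rank)) @ A
y_full = x @ W + b
y_pcn = ((x - mu) @ P) @ W_pcn + b_pcn
print(k, np.abs(y_full - y_pcn).max())           # k recovers the true rank; error is small
```

Folding the input mean into the bias keeps the reduced layer exactly equivalent to projecting inputs onto the retained subspace, so the approximation quality depends only on how much activation variance the dropped directions carried.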


