LOFT: Finding Lottery Tickets through Filter-wise Training

by Qihan Wang, et al.

Recent work on the Lottery Ticket Hypothesis (LTH) shows that there exist “winning tickets” in large neural networks: sparse subnetworks of the full model that can be trained in isolation to reach accuracy comparable to the full model. However, finding winning tickets requires pretraining the large model for at least several epochs, which becomes increasingly burdensome as the original neural network grows. In this paper, we explore how to efficiently identify the emergence of such winning tickets, and we use this observation to design efficient pretraining algorithms. For clarity of exposition, we focus on convolutional neural networks (CNNs). To identify good filters, we propose a novel filter distance metric that well represents model convergence. As our theory dictates, our filter analysis is consistent with recent findings on neural network learning dynamics. Motivated by these observations, we present the LOttery ticket through Filter-wise Training algorithm, dubbed LoFT. LoFT is a model-parallel pretraining algorithm that partitions convolutional layers by filters and trains them independently in a distributed setting, reducing memory and communication costs during pretraining. Experiments show that LoFT i) preserves and finds good lottery tickets, and ii) achieves non-trivial computation and communication savings while maintaining accuracy comparable to, or even better than, other pretraining methods.
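The filter-wise partitioning idea can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the number of workers, and the dummy "local update" are all illustrative assumptions; it only shows how a conv layer's filter bank (shape `(out_filters, in_channels, k, k)`) could be split along the filter axis, updated independently per worker, and reassembled.

```python
import numpy as np

# Illustrative sketch of filter-wise partitioning (names are hypothetical,
# not from the paper). A conv layer's weights have shape
# (out_filters, in_channels, k, k); we shard along the filter axis.

rng = np.random.default_rng(0)
weights = rng.standard_normal((8, 3, 3, 3))  # 8 filters, 3 in-channels, 3x3 kernels

def partition_filters(w, num_workers):
    """Split the filter (first) axis into contiguous shards, one per worker."""
    return np.array_split(w, num_workers, axis=0)

def local_update(shard, lr=0.01):
    """Stand-in for a worker's independent training step (here a dummy
    shrink-toward-zero step, just to show shards evolve separately)."""
    return shard - lr * shard

def reassemble(shards):
    """Concatenate worker shards back into the full filter bank."""
    return np.concatenate(shards, axis=0)

shards = partition_filters(weights, num_workers=4)   # 4 workers, 2 filters each
updated = [local_update(s) for s in shards]          # no cross-worker communication
merged = reassemble(updated)                         # full filter bank recovered

assert merged.shape == weights.shape
```

Since each worker touches only its own filter shard, per-worker memory and gradient-exchange traffic shrink roughly in proportion to the number of workers, which is the source of the savings the abstract describes.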


Unsupervised Pretraining Encourages Moderate-Sparseness

It is well known that direct training of deep neural networks will gener...

Training CNNs with Low-Rank Filters for Efficient Image Classification

We propose a new method for creating computationally efficient convoluti...

Batch Normalization Tells You Which Filter is Important

The goal of filter pruning is to search for unimportant filters to remov...

Filter sharing: Efficient learning of parameters for volumetric convolutions

Typical convolutional neural networks (CNNs) have several millions of pa...

Band-limited Training and Inference for Convolutional Neural Networks

The convolutional layers are core building blocks of neural network arch...

GaterNet: Dynamic Filter Selection in Convolutional Neural Network via a Dedicated Global Gating Network

The concept of conditional computation for deep nets has been proposed p...
