Enabling On-Device CNN Training by Self-Supervised Instance Filtering and Error Map Pruning

by Yawen Wu et al.

This work aims to enable on-device training of convolutional neural networks (CNNs) by reducing the computation cost at training time. CNN models are usually trained on high-performance computers, and only the trained models are deployed to edge devices. But a statically trained model cannot adapt dynamically to the real environment and may yield low accuracy for new inputs. On-device training, which learns from real-world data after deployment, can greatly improve accuracy; however, the high computation cost makes training prohibitive for resource-constrained devices. To tackle this problem, we explore the computational redundancies in training and reduce the computation cost with two complementary approaches: self-supervised early instance filtering at the data level and error map pruning at the algorithm level. The early instance filter selects important instances from the input stream to train the network and drops trivial ones. Error map pruning further prunes out insignificant computations when training with the selected instances. Extensive experiments show that the computation cost is substantially reduced with no or only marginal accuracy loss. For example, when training ResNet-110 on CIFAR-10, we achieve 68% computation saving with no accuracy loss and 75% saving with a marginal accuracy loss. Aggressive computation saving of 96% is achieved with little accuracy loss when quantization is integrated into the proposed approaches. Besides, when training LeNet on MNIST, we save 79% of the computation while boosting accuracy by 0.2%.
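To make the two ideas concrete, here is a minimal NumPy sketch of both mechanisms. It is an illustration, not the paper's implementation: the paper's instance filter is a learned, self-supervised predictor, which we approximate here with a simple highest-loss selection, and the `keep_ratio`/`prune_ratio` values are arbitrary illustrative choices.

```python
import numpy as np

def filter_instances(losses, keep_ratio=0.32):
    """Early instance filtering (proxy): keep only the highest-loss
    fraction of a mini-batch for training and drop the trivial rest.
    The real filter in the paper is a learned predictor, not a loss sort."""
    k = max(1, int(len(losses) * keep_ratio))
    return np.argsort(losses)[-k:]  # indices of the k most "important" instances

def prune_error_map(err, prune_ratio=0.9):
    """Error map pruning (sketch): zero the smallest-magnitude entries of a
    backpropagated error map so the gradient computations they feed can be
    skipped, trading a little gradient fidelity for large compute savings."""
    flat = np.abs(err).ravel()
    k = int(flat.size * prune_ratio)
    if k == 0:
        return err
    thresh = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(err) > thresh, err, 0.0)

# Illustrative use: select half the batch, then prune half the error map.
losses = np.array([0.1, 0.9, 0.3, 0.7])
selected = filter_instances(losses, keep_ratio=0.5)
err = np.array([1.0, -0.2, 0.05, 3.0, -0.5, 0.01])
sparse_err = prune_error_map(err, prune_ratio=0.5)
```

In a real training loop, the pruned error map would be propagated in sparse form so the convolution backward pass only touches the surviving entries, which is where the computation saving comes from.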


E2-Train: Energy-Efficient Deep Network Training with Data-, Model-, and Algorithm-Level Saving

Convolutional neural networks (CNNs) have been increasingly deployed to ...

Recursive Least Squares for Training and Pruning Convolutional Neural Networks

Convolutional neural networks (CNNs) have succeeded in many practical ap...

Now that I can see, I can improve: Enabling data-driven finetuning of CNNs on the edge

In today's world, a vast amount of data is being generated by edge devic...

Group Knowledge Transfer: Collaborative Training of Large CNNs on the Edge

Scaling up the convolutional neural network (CNN) size (e.g., width, dep...

Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks

It is desirable to train convolutional networks (CNNs) to run more effic...

Two Novel Performance Improvements for Evolving CNN Topologies

Convolutional Neural Networks (CNNs) are the state-of-the-art algorithms...

It's always personal: Using Early Exits for Efficient On-Device CNN Personalisation

On-device machine learning is becoming a reality thanks to the availabil...
