SplitNet: Divide and Co-training

11/30/2020
by Shuai Zhao, et al.

The width of a neural network matters, since increasing the width necessarily increases model capacity. However, a network's performance does not improve linearly with its width and soon saturates. To tackle this problem, we propose to increase the number of networks rather than purely scaling up the width. To verify this, one large network is divided into several small ones, each with a fraction of the original network's parameters. We then train these small networks together, exposing them to different views of the same data so that they learn different and complementary knowledge. During this co-training process, the networks can also learn from each other. As a result, the small networks achieve better ensemble performance than the large one with few or no extra parameters or FLOPs. This reveals that the number of networks is a new dimension of effective model scaling, alongside depth, width, and resolution. The small networks can also achieve faster inference than the large one by running concurrently on different devices. We validate this idea – that increasing the number of networks is a new dimension of effective model scaling – through extensive experiments with different network architectures on common benchmarks. The code is available at <https://github.com/mzhaoshuai/SplitNet-Divide-and-Co-training>.
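The abstract outlines the core training recipe: split one large network into several smaller ones, give each its own augmented view of a batch, and train them jointly so they learn complementary knowledge while also learning from one another. The sketch below is a minimal illustration of that recipe, not the authors' implementation (see the linked repository for that). The names `co_train_step`, `augment`, and the ensemble-distillation term weighted by `lambda_cot` are hypothetical stand-ins for one plausible way to realize the mutual-learning loss.

```python
import torch
import torch.nn.functional as F

def co_train_step(models, optimizers, x, y, augment, lambda_cot=1.0):
    """One co-training step (illustrative sketch): each small network
    sees its own augmented view of the same batch, learns from the
    labels, and additionally learns from the other networks."""
    logits = []
    for model, opt in zip(models, optimizers):
        opt.zero_grad()
        logits.append(model(augment(x)))  # a different random view per network

    # Supervised loss: each network is trained on its own view.
    loss = sum(F.cross_entropy(z, y) for z in logits)

    # Co-training loss (assumed form): pull each network toward the
    # detached ensemble prediction so the networks learn from each other.
    ensemble = torch.stack(logits).mean(dim=0).detach()
    loss = loss + lambda_cot * sum(
        F.kl_div(F.log_softmax(z, dim=1), F.softmax(ensemble, dim=1),
                 reduction="batchmean")
        for z in logits)

    loss.backward()
    for opt in optimizers:
        opt.step()
    return loss.item()
```

At inference, the same list of small networks would run concurrently (possibly on different devices) and their outputs would be averaged, which is how the ensemble can be both more accurate and faster than the single large network.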


