When does mixup promote local linearity in learned representations?

10/28/2022
by Arslan Chaudhry, et al.

Mixup is a regularization technique that artificially produces new samples as convex combinations of original training points. This simple technique has shown strong empirical performance and has been heavily used as a component of semi-supervised learning methods such as MixMatch <cit.> and Interpolation Consistency Training (ICT) <cit.>. In this paper, we look at Mixup through a representation learning lens in a semi-supervised learning setup. In particular, we study the role of Mixup in promoting linearity in the learned network representations. To this end, we study two questions: (1) how does the Mixup loss, which enforces linearity in the last network layer, propagate that linearity to the earlier layers; and (2) how does enforcing a stronger Mixup loss on more than two data points affect the convergence of training? We empirically investigate these properties of Mixup on vision datasets such as CIFAR-10, CIFAR-100, and SVHN. Our results show that supervised Mixup training does not make all the network layers linear; in fact, the intermediate layers become more non-linear during Mixup training than in a network trained without Mixup. However, when Mixup is used as an unsupervised loss, we observe that all the network layers become more linear, resulting in faster training convergence.
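The convex-combination step at the heart of Mixup can be sketched in a few lines. This is a minimal NumPy illustration of the general technique, not the authors' implementation; the function name, `alpha` default, and Beta-distribution sampling (as in the original Mixup formulation) are illustrative choices:

```python
import numpy as np

def mixup_pair(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix two samples and their (one-hot) labels with a Beta-sampled weight.

    Returns the mixed input, the mixed label, and the mixing coefficient.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing weight in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2       # convex combination of inputs
    y = lam * y1 + (1.0 - lam) * y2       # same combination of labels
    return x, y, lam
```

In the unsupervised variant studied in the paper, a loss of this form is applied to intermediate representations rather than to labels, encouraging the network to map interpolated inputs to interpolated features.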
