Improving Differentiable Architecture Search via Self-Distillation

02/11/2023
by Xunyu Zhu, et al.

Differentiable Architecture Search (DARTS) is a simple yet efficient Neural Architecture Search (NAS) method. During the search stage, DARTS trains a supernet by jointly optimizing architecture parameters and network parameters. During the evaluation stage, DARTS derives the optimal architecture from the architecture parameters. However, the loss landscape of the supernet is not smooth, which results in a performance gap between the supernet and the optimal architecture. In this paper, we propose Self-Distillation Differentiable Neural Architecture Search (SD-DARTS), which uses self-distillation to transfer knowledge from the supernet at previous steps to guide its training at the current step. SD-DARTS minimizes the loss difference between two consecutive iterations, thereby reducing the sharpness of the supernet's loss and bridging the performance gap between the supernet and the optimal architecture. Furthermore, we propose voted teachers, which select multiple previous supernets as teachers and vote on their output probabilities to form the final teacher prediction. Since the knowledge of several teachers is richer than that of a single teacher, voted teachers are better suited to guiding the training of the supernet. Experimental results on real datasets illustrate the advantages of our self-distillation-based NAS method over state-of-the-art alternatives.
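To make the self-distillation idea concrete, the sketch below shows one way the voted-teachers loss could look in PyTorch. This is only an illustrative approximation based on the abstract, not the authors' released implementation; the names (`teacher_snapshots`, `sd_darts_loss`, `KD_TEMPERATURE`, `KD_WEIGHT`, `NUM_TEACHERS`) and the specific hyper-parameter values are assumptions.

```python
import copy
from collections import deque

import torch
import torch.nn.functional as F

# Hypothetical hyper-parameters (illustrative, not taken from the paper).
KD_TEMPERATURE = 4.0   # temperature used to soften teacher/student logits
KD_WEIGHT = 0.5        # weight of the self-distillation term
NUM_TEACHERS = 3       # number of previous-step supernets kept as voted teachers

# Rolling buffer of frozen supernet snapshots from previous training steps.
teacher_snapshots = deque(maxlen=NUM_TEACHERS)


def voted_teacher_probs(x):
    """Average the softened output probabilities of the stored teachers (the 'vote')."""
    probs = []
    with torch.no_grad():
        for teacher in teacher_snapshots:
            probs.append(F.softmax(teacher(x) / KD_TEMPERATURE, dim=-1))
    return torch.stack(probs).mean(dim=0)


def sd_darts_loss(supernet, x, y):
    """Cross-entropy on the labels plus distillation toward the voted teachers."""
    logits = supernet(x)
    ce = F.cross_entropy(logits, y)
    if not teacher_snapshots:  # first steps: no previous supernet to distill from
        return ce
    student_log_probs = F.log_softmax(logits / KD_TEMPERATURE, dim=-1)
    kd = F.kl_div(student_log_probs, voted_teacher_probs(x),
                  reduction="batchmean") * (KD_TEMPERATURE ** 2)
    return ce + KD_WEIGHT * kd


def snapshot_teacher(supernet):
    """After a training step, store a frozen copy of the supernet as a new teacher."""
    teacher = copy.deepcopy(supernet).eval()
    for p in teacher.parameters():
        p.requires_grad_(False)
    teacher_snapshots.append(teacher)
```

In this reading, the student at step t is distilled from snapshots of itself at steps t-1, ..., t-k, so pulling the current predictions toward the averaged previous ones penalizes large loss changes between consecutive iterations, which is the sharpness-reduction effect the abstract describes.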
