Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective

02/23/2021
by Wuyang Chen, et al.

Neural Architecture Search (NAS) has been studied intensively to automate the discovery of top-performing neural networks. Existing methods require heavy training of a supernet or intensive architecture evaluations, thus suffering from high resource consumption and often incurring search bias due to truncated training or approximations. Can we select the best neural architectures without any training, eliminating a drastic portion of the search cost? We provide an affirmative answer by proposing a novel framework called training-free neural architecture search (TE-NAS). TE-NAS ranks architectures by analyzing the spectrum of the neural tangent kernel (NTK) and the number of linear regions in the input space. Both measurements are motivated by recent theoretical advances in deep networks and can be computed without any training or labels. We show that: (1) these two measurements imply the trainability and expressivity of a neural network, and (2) they strongly correlate with the network's test accuracy. Furthermore, we design a pruning-based NAS mechanism to achieve a more flexible and superior trade-off between trainability and expressivity during the search. On the NAS-Bench-201 and DARTS search spaces, TE-NAS completes a high-quality search in only 0.5 and 4 GPU hours with one 1080Ti on CIFAR-10 and ImageNet, respectively. We hope our work inspires more attempts to bridge theoretical findings about deep networks with practical impact in real NAS applications. Code is available at: https://github.com/VITA-Group/TENAS.
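To make the two training-free measurements concrete, the sketch below computes (a) the condition number of the empirical NTK Gram matrix and (b) the number of distinct ReLU activation patterns over a sampled batch (a proxy for linear regions), both at random initialization with no labels. This is a minimal PyTorch sketch, not the authors' TENAS implementation; the helper names ntk_condition_number and count_linear_regions, the toy MLP, and the random input batch are illustrative assumptions.

```python
# Minimal sketch of the two training-free metrics, assuming a scalar-output
# ReLU network and a small batch of unlabeled inputs. Not the TENAS code.
import torch
import torch.nn as nn

def ntk_condition_number(net, x):
    """Condition number of the empirical NTK Gram matrix at initialization.

    Stacks the per-example gradients of the scalar output w.r.t. all
    parameters into a Jacobian J, then takes the spectrum of J @ J^T.
    """
    params = tuple(net.parameters())
    grads = []
    for xi in x:                                # one backward pass per example
        out = net(xi.unsqueeze(0)).sum()
        g = torch.autograd.grad(out, params)
        grads.append(torch.cat([p.reshape(-1) for p in g]))
    jac = torch.stack(grads)                    # (batch, num_params)
    ntk = jac @ jac.t()                         # empirical NTK Gram matrix
    eigvals = torch.linalg.eigvalsh(ntk)        # ascending; may be ~0 at the low end
    return (eigvals[-1] / eigvals[0]).item()    # kappa = lambda_max / lambda_min

def count_linear_regions(net, x):
    """Counts distinct ReLU activation patterns over a batch of inputs.

    Each unique sign pattern of the pre-activations corresponds to a
    different linear region of the piecewise-linear network function.
    """
    patterns, signs, hooks = set(), [], []
    for m in net.modules():
        if isinstance(m, nn.ReLU):
            hooks.append(m.register_forward_hook(
                lambda _m, inp, _out: signs.append((inp[0] > 0).flatten(1))))
    with torch.no_grad():
        for xi in x:
            signs.clear()
            net(xi.unsqueeze(0))
            pattern = torch.cat(signs, dim=1)   # concatenated sign pattern
            patterns.add(tuple(pattern.squeeze(0).tolist()))
    for h in hooks:
        h.remove()
    return len(patterns)

if __name__ == "__main__":
    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                        nn.Linear(32, 32), nn.ReLU(),
                        nn.Linear(32, 1))
    x = torch.randn(64, 16)                     # random inputs; no labels needed
    print("NTK condition number:", ntk_condition_number(net, x))
    print("Linear regions sampled:", count_linear_regions(net, x))
```

In the paper's framing, a smaller NTK condition number indicates better trainability and more sampled linear regions indicates higher expressivity; candidate architectures can then be ranked by these two scores without training a single one.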


