CLOSE: Curriculum Learning On the Sharing Extent Towards Better One-shot NAS

by Zixuan Zhou et al.

One-shot Neural Architecture Search (NAS) has been widely used to discover architectures because of its search efficiency. However, previous studies show that the one-shot performance estimation of an architecture may not correlate well with its performance in stand-alone training, due to excessive sharing of operation parameters (i.e., a large sharing extent) between architectures. Recent methods therefore construct even more over-parameterized supernets to reduce the sharing extent, but the large number of extra parameters causes an undesirable trade-off between training cost and ranking quality. To alleviate these issues, we propose to apply Curriculum Learning On the Sharing Extent (CLOSE) to train the supernet both efficiently and effectively: the supernet starts with a large sharing extent (an easier curriculum), which is gradually decreased during training (a harder curriculum). To support this training strategy, we design a novel supernet (CLOSENet) that decouples parameters from operations, enabling a flexible sharing scheme and an adjustable sharing extent. Extensive experiments demonstrate that CLOSE obtains better ranking quality across different computational budget constraints than other one-shot supernets, and discovers superior architectures when combined with various search strategies. Code is available at
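The curriculum idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: a hypothetical "parameter pool" holds K weight copies, every edge of the supernet is assigned one pool slot, and K is enlarged at assumed epoch milestones so that fewer edges share each slot (i.e., the sharing extent shrinks as training proceeds). All names, the round-robin assignment, and the schedule are assumptions for illustration.

```python
import random

class SharedParameterPool:
    """Toy stand-in for decoupling parameters from operations:
    edges index into a pool of shared weights instead of owning them."""

    def __init__(self, num_edges, init_slots=1):
        self.num_edges = num_edges
        self.slots = [0.0] * init_slots               # one scalar "weight" per slot
        self.assign = [i % init_slots for i in range(num_edges)]

    def grow(self, new_size):
        """Decrease the sharing extent: add slots (warm-started as copies of
        existing ones) and re-assign edges round-robin over the larger pool."""
        while len(self.slots) < new_size:
            src = random.randrange(len(self.slots))
            self.slots.append(self.slots[src])        # clone an existing slot
        self.assign = [i % len(self.slots) for i in range(self.num_edges)]

    def sharing_extent(self):
        """Average number of edges sharing each pool slot."""
        return self.num_edges / len(self.slots)

# Curriculum: enlarge the pool at fixed epoch milestones (schedule assumed).
schedule = {0: 1, 100: 2, 200: 4, 300: 8}             # epoch -> pool size
pool = SharedParameterPool(num_edges=14)              # e.g., a 14-edge cell
for epoch in sorted(schedule):
    pool.grow(schedule[epoch])
    print(f"epoch {epoch}: {len(pool.slots)} slots, "
          f"sharing extent {pool.sharing_extent():.2f}")
```

Early on, one slot serves all 14 edges (maximal sharing, cheap and stable); by the final milestone each slot serves fewer than two edges, so per-architecture estimates rely less on heavily shared weights.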




Related papers:

PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search

The wide application of pre-trained models is driving the trend of once-...

Prior-Guided One-shot Neural Architecture Search

Neural architecture search methods seek optimal candidates with efficien...

Understanding and Improving One-shot Neural Architecture Optimization

The ability of accurately ranking candidate architectures is the key to ...

RD-NAS: Enhancing One-shot Supernet Ranking Ability via Ranking Distillation from Zero-cost Proxies

Neural architecture search (NAS) has made tremendous progress in the aut...

BS-NAS: Broadening-and-Shrinking One-Shot NAS with Searchable Numbers of Channels

One-Shot methods have evolved into one of the most popular methods in Ne...

AceNAS: Learning to Rank Ace Neural Architectures with Weak Supervision of Weight Sharing

Architecture performance predictors have been widely used in neural arch...

PA&DA: Jointly Sampling PAth and DAta for Consistent NAS

Based on the weight-sharing mechanism, one-shot NAS methods train a supe...
