Pareto-aware Neural Architecture Generation for Diverse Computational Budgets

10/14/2022
by Yong Guo et al.

Designing feasible and effective architectures under the diverse computational budgets incurred by different applications and devices is essential for deploying deep models in real-world settings. To achieve this, existing methods often perform an independent architecture search for each target budget, which is highly inefficient and often unnecessary. More critically, these independent search processes cannot share their learned knowledge (i.e., the distribution of good architectures) with each other, and thus often yield limited search results. To address these issues, we propose a Pareto-aware Neural Architecture Generator (PNAG) that needs to be trained only once and then dynamically produces a Pareto-optimal architecture for any given budget via inference. To train PNAG, we learn the whole Pareto frontier by jointly finding multiple Pareto-optimal architectures under diverse budgets. This joint search algorithm not only greatly reduces the overall search cost but also improves the search results. Extensive experiments on three hardware platforms (i.e., mobile device, CPU, and GPU) show the superiority of our method over existing methods.
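The core idea of selecting a Pareto-optimal architecture under a budget can be illustrated with a minimal sketch. This is not the paper's PNAG implementation; it only shows, under hypothetical (latency, accuracy) scores, what "Pareto frontier" and "best architecture within a budget" mean. All candidate numbers are made up for illustration.

```python
def pareto_frontier(candidates):
    """Return the candidates not dominated by any other.

    A candidate (lat, acc) is dominated if some other candidate has
    latency <= lat and accuracy >= acc, with the pair not identical.
    """
    frontier = []
    for lat, acc in candidates:
        dominated = any(
            l2 <= lat and a2 >= acc and (l2, a2) != (lat, acc)
            for l2, a2 in candidates
        )
        if not dominated:
            frontier.append((lat, acc))
    return sorted(frontier)


def best_under_budget(frontier, latency_budget):
    """Pick the highest-accuracy frontier point within the latency budget."""
    feasible = [(lat, acc) for lat, acc in frontier if lat <= latency_budget]
    if not feasible:
        return None  # budget too tight for any candidate
    return max(feasible, key=lambda p: p[1])


# Hypothetical (latency_ms, top-1 accuracy) pairs for candidate architectures.
pool = [(20, 0.70), (25, 0.74), (30, 0.73), (40, 0.78), (60, 0.80), (55, 0.79)]
frontier = pareto_frontier(pool)
print(frontier)
print(best_under_budget(frontier, 45))
```

PNAG amortizes this over all budgets at once: rather than re-searching the pool per budget, a single trained generator maps any budget directly to a frontier architecture.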

Related research:

- 02/27/2021 · Pareto-Frontier-aware Neural Architecture Generation for Diverse Budgets
  Designing feasible and effective architectures under diverse computation...
- 08/23/2022 · FocusFormer: Focusing on What We Need via Architecture Sampler
  Vision Transformers (ViTs) have underpinned the recent breakthroughs in ...
- 06/21/2018 · DPP-Net: Device-aware Progressive Search for Pareto-optimal Neural Architectures
  Recent breakthroughs in Neural Architectural Search (NAS) have achieved ...
- 07/01/2021 · AdaXpert: Adapting Neural Architecture for Growing Data
  In real-world applications, data often come in a growing manner, where t...
- 06/11/2020 · NADS: Neural Architecture Distribution Search for Uncertainty Awareness
  Machine learning (ML) systems often encounter Out-of-Distribution (OoD) ...
- 12/16/2020 · Distilling Optimal Neural Networks: Rapid Search in Diverse Spaces
  This work presents DONNA (Distilling Optimal Neural Network Architecture...
- 06/06/2020 · Conditional Neural Architecture Search
  Designing resource-efficient Deep Neural Networks (DNNs) is critical to ...
