Random search and reproducibility for neural architecture search

L Li, A Talwalkar - Uncertainty in Artificial Intelligence, 2020 - proceedings.mlr.press
Abstract
Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures. To help ground the empirical results in this field, we propose new NAS baselines that build on the following observations: (i) NAS is a specialized hyperparameter optimization problem; and (ii) random search is a competitive baseline for hyperparameter optimization. Leveraging these observations, we evaluate both random search with early-stopping and a novel random search with weight-sharing algorithm on two standard NAS benchmarks: PTB and CIFAR-10. Our results show that random search with early-stopping is a competitive NAS baseline; e.g., it performs at least as well as ENAS, a leading NAS method, on both benchmarks. Additionally, random search with weight-sharing outperforms random search with early-stopping, achieving a state-of-the-art NAS result on PTB and a highly competitive result on CIFAR-10. Finally, we explore the existing reproducibility issues of published NAS results.
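The early-stopping baseline described in the abstract rests on successive halving: train many randomly sampled architectures on a small budget, discard the worst, and grow the budget for the survivors (the paper uses an asynchronous variant, ASHA). The sketch below is a minimal synchronous illustration of that idea, not the authors' implementation; `sample_architecture` and `train_and_evaluate` are hypothetical placeholders for a real search space and training loop, and the budgets and elimination rate `eta=3` are illustrative assumptions rather than the paper's settings.

```python
import random

# Hypothetical helpers (assumptions, not from the paper):
#   sample_architecture() draws a random architecture from the search space;
#   train_and_evaluate(arch, epochs) trains for the given budget and
#   returns a validation score (higher is better).

def sample_architecture():
    # Placeholder search space: an operation and a width per layer.
    ops = ["sep_conv_3x3", "sep_conv_5x5", "max_pool_3x3", "identity"]
    return [(random.choice(ops), random.choice([16, 32, 64])) for _ in range(4)]

def train_and_evaluate(arch, epochs):
    # Stand-in for real training; swap in an actual training loop.
    rng = random.Random(hash((str(arch), epochs)))
    return rng.random() + 0.01 * epochs

def random_search_early_stopping(n_archs=27, min_epochs=1, eta=3, max_epochs=27):
    """Synchronous successive halving: evaluate all candidates on a small
    budget, keep the top 1/eta, then retry with eta times the budget."""
    candidates = [sample_architecture() for _ in range(n_archs)]
    budget = min_epochs
    while len(candidates) > 1 and budget <= max_epochs:
        scored = [(train_and_evaluate(a, budget), a) for a in candidates]
        scored.sort(key=lambda s: s[0], reverse=True)
        candidates = [a for _, a in scored[: max(1, len(scored) // eta)]]
        budget *= eta
    return candidates[0]

if __name__ == "__main__":
    print("selected architecture:", random_search_early_stopping())
```

The weight-sharing variant differs in where the compute goes: a single shared-weights network is trained once, and sampled architectures are scored by inheriting those weights, making each candidate evaluation cheap relative to training from scratch.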