Python Wrapper for Simulating Multi-Fidelity Optimization on HPO Benchmarks without Any Wait
Hyperparameter (HP) optimization of deep learning (DL) is essential for high performance. As DL training often requires several hours to days, HP optimization (HPO) of DL is often prohibitively expensive. This has driven the emergence of tabular and surrogate benchmarks, which enable querying the (predictive) performance of DL with a specific HP configuration in a fraction of a second. However, since the actual runtime of a DL training differs significantly from the benchmark's query response time, a naive simulator of asynchronous HPO, e.g., multi-fidelity optimization, must wait for the actual runtime at each iteration; otherwise, the evaluation order in the simulation does not match that of the real experiment. To ease this issue, we develop a Python wrapper that forces each worker to wait so that the evaluation order matches that of the real experiment, and we describe its usage. Our implementation reduces the waiting time to 0.01 seconds and is available at https://github.com/nabenabe0928/mfhpo-simulator/.
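To illustrate the problem the wrapper addresses, the following is a minimal, hypothetical sketch (not the mfhpo-simulator API) of how an asynchronous HPO run on a tabular/surrogate benchmark can be reproduced without real waiting: each worker keeps a virtual clock that is advanced by the runtime reported by the benchmark, and the worker with the smallest virtual clock is always the next one to become free. The benchmark, sampler, and all function names here are placeholders for illustration only.

```python
"""Hypothetical sketch: simulate asynchronous HPO with virtual worker clocks."""
from __future__ import annotations

import heapq
import random
from typing import Callable, Dict, List, Tuple

# Assumed benchmark interface: returns (loss, runtime the real DL training would take).
Benchmark = Callable[[Dict[str, float]], Tuple[float, float]]


def toy_benchmark(config: Dict[str, float]) -> Tuple[float, float]:
    """Stand-in surrogate benchmark: responds instantly, but reports a simulated runtime."""
    loss = (config["lr"] - 1e-3) ** 2
    runtime = 3600.0 * (1.0 + random.random())  # 1-2 simulated hours of DL training
    return loss, runtime


def simulate_async_hpo(benchmark: Benchmark, n_workers: int, n_evals: int) -> List[tuple]:
    """Reproduce the evaluation order of an asynchronous HPO run without sleeping."""
    # Priority queue of (virtual_finish_time, worker_id); all workers start idle at t = 0.
    workers = [(0.0, wid) for wid in range(n_workers)]
    heapq.heapify(workers)
    results = []
    for _ in range(n_evals):
        # The worker that becomes free earliest in *virtual* time evaluates next.
        free_at, wid = heapq.heappop(workers)
        config = {"lr": 10 ** random.uniform(-5, -1)}  # placeholder random sampler
        loss, runtime = benchmark(config)              # returns almost instantly
        finish = free_at + runtime                     # advance only the virtual clock
        heapq.heappush(workers, (finish, wid))
        results.append((finish, wid, config, loss))
    # Sorting by virtual finish time yields the order a real experiment would observe.
    return sorted(results)


if __name__ == "__main__":
    for finish, wid, config, loss in simulate_async_hpo(toy_benchmark, n_workers=4, n_evals=10):
        print(f"t={finish:9.1f}s  worker={wid}  lr={config['lr']:.2e}  loss={loss:.3e}")
```

This single-process sketch sidesteps waiting entirely; the wrapper described in the paper instead coordinates multiple real worker processes, which is why a small residual wait (0.01 seconds) remains.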