A Practical Bandit Method with Advantages in Neural Network Tuning
Stochastic bandit algorithms can be applied to challenging non-convex optimization problems. Hyperparameter tuning of neural networks is one such problem, and it is difficult enough to warrant new approaches. To this end, we present a method that adaptively partitions the combined space of hyperparameters, context, and training resources (e.g., the total number of training iterations). By adaptively partitioning this space, the algorithm concentrates its sampling on the portions of the hyperparameter search space that matter most in practice. By including the training resources in the combined space, the method tends to use fewer training resources overall. Our experiments show that this method can surpass state-of-the-art methods in tuning neural networks on benchmark datasets. In some cases, our implementations achieve the same levels of accuracy on benchmark datasets as existing state-of-the-art approaches while saving over 50% of the training resources.
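The abstract does not include pseudocode, but the core idea it describes (a bandit that adaptively partitions a combined hyperparameter-and-resource space) can be sketched. The following is a minimal illustrative sketch, not the paper's actual algorithm: the names `Cell`, `tune`, and `evaluate`, the UCB-style scoring, and the split-after-three-samples rule are all assumptions chosen for brevity.

```python
import math

class Cell:
    """An axis-aligned box in the combined (hyperparameter, resource) space.

    Illustrative assumption: one coordinate of the box encodes the training
    budget, so cheap, low-resource configurations compete directly with
    expensive ones.
    """
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi          # per-dimension bounds in [0, 1]
        self.rewards = []                  # rewards observed inside this cell

    def center(self):
        return [(a + b) / 2 for a, b in zip(self.lo, self.hi)]

    def ucb(self, t):
        """Optimistic score: empirical mean + exploration + cell-size bonus."""
        if not self.rewards:
            return float("inf")            # unvisited cells are tried first
        n = len(self.rewards)
        width = max(b - a for a, b in zip(self.lo, self.hi))
        return sum(self.rewards) / n + math.sqrt(2 * math.log(t) / n) + width

    def split(self):
        """Bisect along the widest dimension, refining the partition there."""
        d = max(range(len(self.lo)), key=lambda i: self.hi[i] - self.lo[i])
        mid = (self.lo[d] + self.hi[d]) / 2
        left = Cell(self.lo[:], self.hi[:d] + [mid] + self.hi[d + 1:])
        right = Cell(self.lo[:d] + [mid] + self.lo[d + 1:], self.hi[:])
        return left, right

def tune(evaluate, dim, budget):
    """Adaptive-partitioning bandit loop over the unit hypercube.

    `evaluate(point)` is assumed to train a network with the hyperparameters
    (and resource level) encoded by `point` and return a reward such as
    validation accuracy.
    """
    cells = [Cell([0.0] * dim, [1.0] * dim)]
    best_reward, best_point = -float("inf"), None
    for t in range(1, budget + 1):
        cell = max(cells, key=lambda c: c.ucb(t))   # most promising cell
        x = cell.center()
        r = evaluate(x)
        cell.rewards.append(r)
        if r > best_reward:
            best_reward, best_point = r, x
        if len(cell.rewards) >= 3:                  # enough evidence: refine
            cells.remove(cell)
            cells.extend(cell.split())
    return best_reward, best_point
```

In this sketch, the partition grows only where observed rewards are high, so sampling (and hence training compute) concentrates on the promising regions of the space, mirroring the resource-saving behavior the abstract claims.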