Graduated Optimization of Black-Box Functions

06/04/2019
by Weijia Shao, et al.

Motivated by the problem of tuning hyperparameters in machine learning, we present a new approach for gradually and adaptively optimizing an unknown function using estimated gradients. We validate the empirical performance of the proposed approach on both low- and high-dimensional problems. The experimental results demonstrate the advantages of our approach for tuning high-dimensional hyperparameters in machine learning.
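The paper's own algorithm is not reproduced here; as a rough sketch of the general idea of graduated optimization with estimated gradients, the example below minimizes a sequence of Gaussian-smoothed surrogates of a black-box objective, tightening the smoothing scale from coarse to fine and using a Monte-Carlo gradient estimator in place of exact gradients. The objective f, the smoothing schedule sigmas, the sample count, and the step size are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def smoothed_grad(f, x, sigma, n_samples=32, rng=None):
    """Monte-Carlo estimate of the gradient of the Gaussian-smoothed objective
    f_sigma(x) = E[f(x + sigma * u)], u ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    u = rng.standard_normal((n_samples, d))
    # Antithetic (two-sided) evaluations reduce the variance of the estimate.
    fp = np.array([f(x + sigma * ui) for ui in u])
    fm = np.array([f(x - sigma * ui) for ui in u])
    return (u * (fp - fm)[:, None]).sum(axis=0) / (2.0 * sigma * n_samples)

def graduated_descent(f, x0, sigmas=(1.0, 0.3, 0.1, 0.03),
                      steps_per_level=100, lr=0.05, rng=None):
    """Graduated optimization: run gradient descent on progressively
    less-smoothed objectives, warm-starting each level from the previous one."""
    x = np.asarray(x0, dtype=float)
    for sigma in sigmas:              # coarse-to-fine smoothing schedule
        for _ in range(steps_per_level):
            x = x - lr * smoothed_grad(f, x, sigma, rng=rng)
    return x

if __name__ == "__main__":
    # Toy multimodal objective: smoothing washes out the shallow local minima.
    f = lambda x: np.sum(x ** 2) + 0.5 * np.sum(np.sin(10 * x))
    print(graduated_descent(f, x0=np.ones(5)))
```

In a hyperparameter-tuning setting, f would be replaced by a validation-loss evaluation and x by a vector of continuous hyperparameters; the coarse-to-fine schedule is what distinguishes this graduated scheme from plain zeroth-order gradient descent.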

