Best-first Search Algorithm for Non-convex Sparse Minimization

10/03/2019
by Shinsaku Sakaue, et al.

Non-convex sparse minimization (NSM), or ℓ_0-constrained minimization of convex loss functions, is an important optimization problem with many machine learning applications. NSM is generally NP-hard, so solving it exactly in polynomial time is intractable in general. For quadratic objective functions, exact algorithms based on mixed-integer programming (MIP) have been studied, but no existing exact methods can handle more general objective functions, including Huber and logistic losses; this is unfortunate since those losses are prevalent in practice. In this paper, we consider NSM with ℓ_2-regularized convex objective functions and develop an algorithm that leverages the efficiency of best-first search (BFS). Our BFS computes solutions whose objective errors are at most Δ > 0, where Δ is a user-specified hyperparameter that controls the trade-off between the strength of the error guarantee and the computation cost. Experiments demonstrate that our BFS is useful for solving moderate-size NSM instances with non-quadratic objectives, and that it is also faster than the MIP-based method on quadratic objectives.
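The abstract does not spell out the search itself, but the general idea of best-first search over candidate supports can be illustrated with a minimal Python sketch. The sketch below explores supports of size at most k for an ℓ_2-regularized least-squares loss, ordering the frontier by each partial support's restricted loss. All function names and parameters here are illustrative assumptions; in particular, this is a plain heuristic search and does not reproduce the paper's pruning bounds or its Δ-error certificate.

```python
import heapq
import itertools

import numpy as np


def support_loss(X, y, support, lam):
    """Ridge loss restricted to a support set S:
    min_w ||X[:, S] w - y||^2 + lam * ||w||^2 (closed form in this quadratic case)."""
    if not support:
        return float(y @ y)
    Xs = X[:, sorted(support)]
    k = Xs.shape[1]
    w = np.linalg.solve(Xs.T @ Xs + lam * np.eye(k), Xs.T @ y)
    r = Xs @ w - y
    return float(r @ r + lam * (w @ w))


def best_first_l0(X, y, k, lam=1e-3, max_expansions=5000):
    """Best-first search over supports of size at most k.

    Heuristic sketch only: unlike the paper's BFS, no lower bounds are
    maintained, so there is no guarantee on the objective error.
    """
    n = X.shape[1]
    tie = itertools.count()            # tiebreaker: sets are not orderable
    root = frozenset()
    best_loss, best_support = support_loss(X, y, root, lam), root
    frontier = [(best_loss, next(tie), root)]
    seen = {root}
    for _ in range(max_expansions):
        if not frontier:
            break
        loss, _, S = heapq.heappop(frontier)
        if len(S) == k:                # full support: candidate solution
            if loss < best_loss:
                best_loss, best_support = loss, S
            continue
        for j in range(n):             # expand by adding one more feature
            if j in S:
                continue
            T = S | {j}
            if T in seen:
                continue
            seen.add(T)
            t_loss = support_loss(X, y, T, lam)
            if len(T) == k and t_loss < best_loss:
                best_loss, best_support = t_loss, T
            heapq.heappush(frontier, (t_loss, next(tie), T))
    return best_loss, best_support


# Example usage on synthetic data with a 3-sparse ground truth.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 12))
w_true = np.zeros(12)
w_true[[1, 4, 7]] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(50)
loss, support = best_first_l0(X, y, k=3)
print(sorted(support), loss)
```

The priority queue is what makes the search "best-first": the most promising partial support is always expanded next, so good size-k supports tend to be reached long before the subset space is exhausted.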
