Maximizing Monotone DR-submodular Continuous Functions by Derivative-free Optimization

10/16/2018
by   Yibo Zhang, et al.

In this paper, we study the problem of monotone (weakly) DR-submodular continuous maximization. While previous methods require gradient information about the objective function, we propose the first derivative-free algorithm, LDGM. We define β and α to characterize how close a function is to continuous DR-submodular and submodular, respectively. Under a convex polytope constraint, we prove that LDGM achieves a (1-e^-β-ϵ)-approximation guarantee after O(1/ϵ) iterations, matching the guarantee of the best previous gradient-based algorithm. Moreover, in some special cases, a variant of LDGM achieves a ((α/2)(1-e^-α)-ϵ)-approximation guarantee for (weakly) submodular functions. We also compare LDGM with the gradient-based Frank-Wolfe algorithm under noise, and show that LDGM can be more robust. Empirical results on the budget allocation problem verify the effectiveness of LDGM.
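To make the setting concrete, the sketch below shows a generic zeroth-order (derivative-free) continuous-greedy loop for maximizing a monotone DR-submodular function over a convex polytope: exact gradients are replaced by two-point finite-difference estimates, and a linear-maximization oracle picks each ascent direction. This is not the paper's LDGM algorithm; the function names (estimate_gradient, zeroth_order_continuous_greedy), the parameters (delta, n_dirs, T), and the toy objective are all illustrative assumptions.

```python
# Hypothetical sketch: derivative-free continuous-greedy maximization of a
# monotone DR-submodular function over the polytope {x >= 0 : A x <= b}.
# This is NOT the paper's LDGM; it only illustrates replacing exact gradients
# with function-value-based estimates inside a Frank-Wolfe-style loop.

import numpy as np
from scipy.optimize import linprog


def estimate_gradient(f, x, delta=1e-3, n_dirs=50, rng=None):
    """Two-point random-direction gradient estimate of f at x."""
    rng = np.random.default_rng() if rng is None else rng
    d = len(x)
    g = np.zeros(d)
    for _ in range(n_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        # Symmetric finite difference along direction u.
        g += (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
    # E[u u^T] = I/d for u uniform on the sphere, hence the factor d.
    return g * d / n_dirs


def zeroth_order_continuous_greedy(f, A, b, d, T=100, seed=0):
    """Frank-Wolfe-style continuous greedy using only function evaluations.

    After T steps, x is the average (1/T) * sum of feasible oracle points,
    i.e. a convex combination, so it stays inside the polytope.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    for _ in range(T):
        g = estimate_gradient(f, x, rng=rng)
        # Linear maximization oracle: argmax_{v in polytope} <g, v>
        # (linprog minimizes, so negate the objective).
        res = linprog(-g, A_ub=A, b_ub=b, bounds=[(0, None)] * d)
        x = x + res.x / T
    return x


if __name__ == "__main__":
    # Toy monotone DR-submodular objective: f(x) = sum_i log(1 + x_i),
    # under a single budget constraint sum_i x_i <= 1.
    d = 5
    f = lambda x: np.sum(np.log1p(np.maximum(x, 0.0)))
    A = np.ones((1, d))
    b = np.array([1.0])
    x_hat = zeroth_order_continuous_greedy(f, A, b, d, T=50)
    print("solution:", np.round(x_hat, 3), "value:", f(x_hat))
```

For this symmetric toy objective, the returned point should spread the unit budget roughly evenly across coordinates; the number of function evaluations per iteration (2 * n_dirs) is the price paid for not having gradient access.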
