Continuous Submodular Maximization: Boosting via Non-oblivious Function

01/03/2022
by Qixin Zhang, et al.

In this paper, we revisit constrained and stochastic continuous submodular maximization in both offline and online settings. For each γ-weakly DR-submodular function f, we use a factor-revealing optimization equation to derive an optimal auxiliary function F, whose stationary points provide a (1-e^-γ)-approximation to the global maximum value (denoted as OPT) of the problem max_x∈𝒞 f(x). Naturally, projected (mirror) gradient ascent applied to this non-oblivious function attains an objective value of at least (1-e^-γ-ϵ^2)OPT-ϵ after O(1/ϵ^2) iterations, beating the traditional (γ^2/(1+γ^2))-approximation gradient ascent <cit.> for submodular maximization. Similarly, based on F, the classical Frank-Wolfe algorithm equipped with a variance-reduction technique <cit.> also returns a solution with objective value larger than (1-e^-γ-ϵ^2)OPT-ϵ after O(1/ϵ^3) iterations. In the online setting, we first consider adversarial delays for stochastic gradient feedback, under which we propose a boosting online gradient algorithm with the same non-oblivious search, achieving a regret bound of O(√(D)) (where D is the sum of the delays of the gradient feedback) against a (1-e^-γ)-approximation to the best feasible solution in hindsight. Finally, extensive numerical experiments demonstrate the efficiency of our boosting methods.
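
The boosting idea can be pictured with a short sketch: instead of ascending on f directly, one ascends on an auxiliary function F whose gradient is a reweighted average of ∇f along the ray from the origin to the current point. The snippet below is a minimal illustration in this spirit, not the paper's exact construction: the exponential weighting e^{γ(z-1)} on z ∈ [0,1], the Monte Carlo estimator, the function names (boosted_gradient_estimate, boosted_projected_gradient_ascent, project), and the toy objective are all assumptions made for exposition.

```python
import numpy as np

def boosted_gradient_estimate(grad_f, x, gamma=1.0, n_samples=32, rng=None):
    """Monte Carlo estimate of a non-oblivious (auxiliary) gradient.

    Assumed form: grad F(x) is proportional to E_{z~p}[grad f(z * x)] with
    p(z) proportional to exp(gamma * (z - 1)) on [0, 1]. This is one natural
    weighting for a gamma-weakly DR-submodular f; the paper's exact auxiliary
    function may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Inverse-transform sampling of z from the density p(z) on [0, 1].
    u = rng.random(n_samples)
    z = np.log(u * (np.exp(gamma) - 1.0) + 1.0) / gamma
    grads = np.stack([grad_f(zi * x) for zi in z])
    return grads.mean(axis=0)

def boosted_projected_gradient_ascent(grad_f, project, x0, step=0.05,
                                      n_iters=200, gamma=1.0, seed=0):
    """Projected gradient ascent driven by the auxiliary gradient instead of grad f."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = boosted_gradient_estimate(grad_f, x, gamma=gamma, rng=rng)
        x = project(x + step * g)
    return x

# Toy usage: maximize a monotone concave (hence DR-submodular) objective over the box [0, 1]^d.
if __name__ == "__main__":
    d = 5
    a = np.linspace(0.5, 1.5, d)
    grad_f = lambda x: a / (1.0 + a * x)          # gradient of sum_i log(1 + a_i * x_i)
    project = lambda x: np.clip(x, 0.0, 1.0)      # Euclidean projection onto the box
    x_star = boosted_projected_gradient_ascent(grad_f, project, np.zeros(d))
    print(np.round(x_star, 3))
```

In this sketch the auxiliary gradient down-weights ∇f near the origin and up-weights it near the current iterate, which is what lets plain projected ascent on F reach stationary points with the stronger (1-e^-γ) guarantee described in the abstract.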
