Online Non-Monotone DR-submodular Maximization

09/25/2019
by Nguyen Kim Thang, et al.

In this paper, we study problems at the interface of two important fields: submodular optimization and online learning. Submodular functions play a vital role in modelling cost functions that naturally arise in many areas of discrete optimization, and they have been studied under various models of computation. Independently, submodularity has been considered in continuous domains; in fact, many problems arising in machine learning and statistics have been modelled using continuous DR-submodular functions. In this work, we study the problem of maximizing non-monotone continuous DR-submodular functions within the framework of online learning. We provide three main results. First, we present an online algorithm (in the full-information setting) that achieves an approximation guarantee (depending on the search space) for the problem of maximizing non-monotone continuous DR-submodular functions over a general convex domain. To the best of our knowledge, no prior approximation algorithm in the full-information setting was known for non-monotone continuous DR-submodular functions, even over down-closed convex domains. Second, we show that the online stochastic mirror ascent algorithm (in the full-information setting) achieves an improved approximation ratio of 1/4 for maximizing non-monotone continuous DR-submodular functions over a down-closed convex domain. Finally, we extend our second result to the bandit setting, where we present the first 1/4 approximation guarantee. To the best of our knowledge, no approximation algorithm for non-monotone submodular maximization was previously known in the bandit setting.
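As background (our gloss in standard notation, not the authors' wording): a differentiable function $F : [0,1]^n \to \mathbb{R}_{\ge 0}$ is continuous DR-submodular if its gradient is antitone,

\[
  x \le y \ \text{(coordinate-wise)} \;\Longrightarrow\; \nabla F(x) \ge \nabla F(y),
\]

or equivalently, when $F$ is twice differentiable, if all entries of its Hessian are non-positive. In online learning, an $\alpha$-approximation guarantee is conventionally read as a sublinear $\alpha$-regret bound against the best fixed point in the domain $\mathcal{K}$; for the paper's ratio $\alpha = 1/4$ this reads

\[
  \frac{1}{4} \max_{x^{*} \in \mathcal{K}} \sum_{t=1}^{T} f_t(x^{*}) \;-\; \mathbb{E}\!\left[ \sum_{t=1}^{T} f_t(x_t) \right] \;=\; o(T).
\]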
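To make the mirror-ascent template concrete, here is a minimal Python sketch over the simplest down-closed convex domain, the box $[0,1]^n$. This is an illustration under our own assumptions, not the paper's exact algorithm: the entropic regularizer, the step size eta, and the grad_oracle interface are all choices made for the sketch, and the paper's mirror map, projection, and analysis may differ.

    import numpy as np

    def online_mirror_ascent(grad_oracle, n, T, eta=0.1):
        """Entropic online mirror ascent over the down-closed box [0, 1]^n.

        A sketch, not the paper's exact algorithm: at each round we play x_t,
        query a (possibly stochastic) gradient estimate of the round's
        DR-submodular reward f_t at x_t, take a multiplicative (entropic
        mirror) step, and Bregman-project back onto the box, which for the
        negative-entropy mirror map amounts to coordinate-wise clipping.
        """
        x = np.full(n, 0.5)                 # interior starting point
        plays = []
        for t in range(T):
            plays.append(x.copy())
            g = grad_oracle(t, x)           # estimate of grad f_t(x_t)
            x = x * np.exp(eta * g)         # entropic mirror step
            x = np.clip(x, 1e-6, 1.0)       # KL projection onto the box
        return plays

    # Toy usage: non-monotone DR-submodular quadratics
    # f_t(x) = a_t . x - x^T H x with H entrywise non-negative,
    # so the Hessian -2H is entrywise non-positive (DR-submodular),
    # while the gradient can change sign (non-monotone).
    rng = np.random.default_rng(0)
    n, T = 5, 200
    H = rng.uniform(0.0, 1.0, (n, n)); H = (H + H.T) / 2

    def grad_oracle(t, x):
        a_t = rng.uniform(0.0, 1.0, n)      # adversary's round-t linear term
        return a_t - 2.0 * H @ x            # exact gradient of f_t at x

    plays = online_mirror_ascent(grad_oracle, n, T)

The entropic mirror map is convenient here because its Bregman projection onto the box reduces to clipping; over a general down-closed convex domain the projection step would require a dedicated oracle.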
