Stochastic Approximation Approaches to Group Distributionally Robust Optimization
This paper investigates group distributionally robust optimization (GDRO), with the goal of learning a model that performs well over m different distributions. First, we formulate GDRO as a stochastic convex-concave saddle-point problem and demonstrate that stochastic mirror descent (SMD), using m samples in each iteration, achieves an O(m (log m)/ϵ^2) sample complexity for finding an ϵ-optimal solution, which matches the Ω(m/ϵ^2) lower bound up to a logarithmic factor. Then, we make use of techniques from online learning to reduce the number of samples required in each round from m to 1, while keeping the same sample complexity. Specifically, we cast GDRO as a two-player game in which one player simply performs SMD and the other executes an online algorithm for non-oblivious multi-armed bandits. Next, we consider a more practical scenario where the number of samples that can be drawn differs across distributions, and propose a novel formulation of weighted DRO, which allows us to derive distribution-dependent convergence rates. Denote by n_i the sample budget for the i-th distribution, and assume n_1 ≥ n_2 ≥ ⋯ ≥ n_m. In the first approach, we incorporate non-uniform sampling into SMD so that the sample budget is satisfied in expectation, and prove that the excess risk on the i-th distribution decreases at an O(√(n_1 log m)/n_i) rate. In the second approach, we use mini-batches to meet the budget exactly and also reduce the variance of the stochastic gradients, and then leverage the stochastic mirror-prox algorithm, which can exploit small variances, to optimize a carefully designed weighted DRO problem. Under appropriate conditions, it attains an O((log m)/√(n_i)) convergence rate, which almost matches the optimal O(√(1/n_i)) rate of learning from the i-th distribution alone with n_i samples.
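To make the first approach concrete, the sketch below illustrates SMD on the saddle-point formulation min_w max_{q∈Δ_m} Σ_i q_i R_i(w), drawing one fresh sample from each of the m distributions per iteration, updating w by projected gradient descent and q by entropic mirror ascent on the simplex. The helpers sample, loss_grad, and loss_val, as well as the Euclidean geometry and L2-ball constraint for w, are illustrative assumptions rather than the paper's exact setup.

    import numpy as np

    def smd_gdro(sample, loss_grad, loss_val, m, dim, T, eta_w, eta_q, radius=1.0):
        """Sketch of stochastic mirror descent for the GDRO saddle-point problem
            min_w max_{q in simplex}  sum_i q_i * R_i(w).

        sample(i)          -- draws one example (x, y) from the i-th distribution (assumed helper)
        loss_grad(w, x, y) -- stochastic gradient of the loss w.r.t. w            (assumed helper)
        loss_val(w, x, y)  -- loss value at (x, y), used to update q              (assumed helper)
        """
        w = np.zeros(dim)
        q = np.full(m, 1.0 / m)                 # uniform weights over the m distributions
        w_avg, q_avg = np.zeros(dim), np.zeros(m)

        for _ in range(T):
            # draw one fresh sample from every distribution (m samples per iteration)
            batch = [sample(i) for i in range(m)]

            # stochastic gradient w.r.t. w: q-weighted sum of per-group loss gradients
            g_w = sum(q[i] * loss_grad(w, *batch[i]) for i in range(m))
            # stochastic gradient w.r.t. q: vector of per-group losses
            g_q = np.array([loss_val(w, *batch[i]) for i in range(m)])

            # descent step for w (Euclidean geometry -> projection onto an L2 ball)
            w = w - eta_w * g_w
            norm = np.linalg.norm(w)
            if norm > radius:
                w *= radius / norm

            # ascent step for q (entropic geometry -> exponentiated gradient on the simplex)
            q = q * np.exp(eta_q * g_q)
            q /= q.sum()

            w_avg += w / T
            q_avg += q / T

        return w_avg, q_avg                     # averaged iterates give the ϵ-optimal solution

With step sizes on the order of 1/√T, roughly T = O((log m)/ϵ^2) iterations suffice, which, at m samples per iteration, is consistent with the O(m (log m)/ϵ^2) sample complexity stated above.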