Data-Driven Projection for Reducing Dimensionality of Linear Programs: Generalization Bound and Learning Methods

09/01/2023
by Shinsaku Sakaue, et al.

This paper studies a simple data-driven approach to high-dimensional linear programs (LPs). Given data on past n-dimensional LPs, we learn an n × k projection matrix (n > k) that reduces the dimensionality from n to k. We then address future LP instances by solving k-dimensional LPs and recovering n-dimensional solutions by multiplying by the projection matrix. This idea is compatible with any user-preferred LP solver, making it a versatile approach to faster LP solving. A natural question is: how much data is sufficient to ensure the quality of the recovered solutions? We address this question using the framework of data-driven algorithm design, which relates the amount of data sufficient for generalization guarantees to the pseudo-dimension of performance metrics. We present an Õ(nk^2) upper bound on the pseudo-dimension (Õ hides logarithmic factors) and complement it with an Ω(nk) lower bound, so the bound is tight up to an Õ(k) factor. On the practical side, we study two natural methods for learning projection matrices: PCA-based and gradient-based methods. While the former is simple and efficient, the latter sometimes yields better solution quality. Experiments confirm that learned projection matrices reduce the time for solving LPs while maintaining high solution quality.
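The projection-and-recovery idea described above can be sketched in a few lines of code. The following is a minimal illustrative sketch, not the paper's implementation: it builds an n × k projection matrix from the top-k principal directions of synthetic "past solutions" (a stand-in for the PCA-based method), solves the reduced k-dimensional LP with SciPy's `linprog`, and recovers an n-dimensional solution. All dimensions, the synthetic data, and the solver choice are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k, m = 50, 5, 80  # original dim, reduced dim, number of constraints

# Hypothetical stand-in for past LP solutions: random data with a few
# dominant directions. PCA (via SVD) of this data gives the projection.
past_solutions = rng.normal(size=(200, n)) @ np.diag(np.linspace(1.0, 0.01, n))
centered = past_solutions - past_solutions.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
P = Vt[:k].T  # n x k projection matrix from top-k principal directions

# A synthetic LP instance:  min c^T x  s.t.  A x <= b.
A = rng.normal(size=(m, n))
b = np.abs(rng.normal(size=m)) + 1.0  # b > 0, so x = 0 is feasible
c = rng.normal(size=n)

# Reduced k-dimensional LP:  min (c^T P) y  s.t.  (A P) y <= b,
# with y unconstrained in sign (linprog defaults to y >= 0, so override).
res = linprog(c @ P, A_ub=A @ P, b_ub=b,
              bounds=[(None, None)] * k, method="highs")

# Recover an n-dimensional solution by multiplying by P.
x_rec = P @ res.x
# Any feasible y is safe: A (P y) = (A P) y <= b, so x_rec satisfies A x <= b.
```

Note the design point this illustrates: because the constraints are of the form A x <= b, feasibility is preserved automatically under recovery, so only the objective value (and hence the choice of P) determines solution quality.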
