Polynomial Optimization: Enhancing RLT Relaxations with Conic Constraints
Conic optimization has recently emerged as a powerful tool for designing tractable algorithms with performance guarantees for non-convex polynomial optimization problems. On the one hand, tractability is crucial for solving large-scale problems efficiently; on the other hand, strong bounds are needed to ensure high-quality solutions. In this research, we investigate the strengthening of RLT relaxations of polynomial optimization problems through the addition of nine different types of constraints, based on linear, second-order cone, and semidefinite programming, in order to solve to optimality the instances of well-established test sets of polynomial optimization problems. We describe how these conic constraints are designed and compare their performance with one another and with the standard RLT relaxations. Our first finding is that the nonlinear variants (second-order cone and semidefinite) are the best-performing ones on around 50% of the instances. Additionally, we present a machine learning approach for deciding which constraints are most suitable to add for a given instance. The computational results show that the machine learning approach significantly outperforms each of the nine individual approaches.
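To illustrate the kind of strengthening studied here, the following minimal sketch (not the authors' implementation) builds the standard RLT relaxation of a small box-constrained nonconvex quadratic problem and adds a semidefinite constraint on the lifted matrix. It assumes Python with cvxpy and an SDP-capable solver; the problem data (Q, c, the bounds) and all names are illustrative only.

    # Minimal sketch, assuming cvxpy is available; data are made up.
    import numpy as np
    import cvxpy as cp

    n = 2
    Q = np.array([[0.0, -1.0],
                  [-1.0, 0.0]])          # nonconvex term: -2 * x1 * x2
    c = np.array([0.5, -0.25])
    lo, up = np.zeros(n), np.ones(n)     # box [0, 1]^n

    # Lifted matrix M = [[1, x^T], [x, X]]; requiring M to be PSD is the
    # semidefinite strengthening of the (purely linear) RLT relaxation.
    M = cp.Variable((n + 1, n + 1), PSD=True)
    x = M[0, 1:]                         # original variables
    X = M[1:, 1:]                        # X[i, j] linearizes the product x_i * x_j

    constraints = [M[0, 0] == 1, x >= lo, x <= up]
    # RLT bound-factor (McCormick-type) constraints for every pair (i, j).
    for i in range(n):
        for j in range(n):
            constraints += [
                X[i, j] >= lo[i] * x[j] + lo[j] * x[i] - lo[i] * lo[j],
                X[i, j] >= up[i] * x[j] + up[j] * x[i] - up[i] * up[j],
                X[i, j] <= lo[i] * x[j] + up[j] * x[i] - lo[i] * up[j],
                X[i, j] <= up[i] * x[j] + lo[j] * x[i] - up[i] * lo[j],
            ]

    prob = cp.Problem(cp.Minimize(cp.trace(Q @ X) + c @ x), constraints)
    prob.solve()                         # any SDP-capable solver bundled with cvxpy
    print("SDP-strengthened RLT lower bound:", prob.value)

Dropping the PSD requirement on M recovers the standard linear RLT relaxation, so the gap between the two optimal values gives a rough sense of how much the conic constraint can tighten the bound on a given instance.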