Combinatorial Losses through Generalized Gradients of Integer Linear Programs

10/18/2019
by Xi Gao, et al.

When samples have internal structure, we often see a mismatch between the objective optimized during training and the model's goal during inference. For example, in sequence-to-sequence modeling we are interested in high-quality translated sentences, but training typically uses maximum likelihood at the word level. Another example is learning to recognize individual faces from group photos, each captioned with the correct but unordered list of the people in it; here, too, the training and inference objectives diverge. In both cases, the natural training-time loss would involve a combinatorial problem – dynamic programming-based global sequence alignment and weighted bipartite graph matching, respectively – but solutions to combinatorial problems are not differentiable with respect to their input parameters, so surrogate, differentiable losses are used instead. Here, we show how to perform gradient descent over combinatorial optimization algorithms that involve continuous parameters, for example edge weights, and can be efficiently expressed as integer, linear, or mixed-integer linear programs. We demonstrate the usefulness of gradient descent over combinatorial optimization in sequence-to-sequence modeling, using a differentiable encoder-decoder architecture with softmax or Gumbel-softmax, and in weakly supervised learning involving a convolutional, residual feed-forward network for image classification.
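The abstract does not spell out the generalized-gradient construction itself, but the overall pattern it describes – a forward pass that runs an exact combinatorial solver and a backward pass that supplies a surrogate gradient for the solver's continuous parameters – can be sketched. The following is a minimal, hypothetical PyTorch example for the weighted bipartite matching case; it uses SciPy's Hungarian solver and the interpolation-based backward pass of blackbox differentiation (Vlastelica et al., 2020), a related technique rather than this paper's exact method. The names MatchingLayer and LAMBDA are illustrative choices, not identifiers from the paper.

```python
# Hypothetical sketch: back-propagating through a combinatorial solver, here
# weighted bipartite matching via SciPy's Hungarian algorithm. The backward
# pass follows the interpolation idea of blackbox differentiation
# (Vlastelica et al., 2020), not necessarily this paper's generalized gradient.
import torch
from scipy.optimize import linear_sum_assignment


def solve_matching(costs: torch.Tensor) -> torch.Tensor:
    """Return the 0/1 assignment matrix that minimizes total matching cost."""
    rows, cols = linear_sum_assignment(costs.detach().cpu().numpy())
    assignment = torch.zeros_like(costs)
    assignment[rows, cols] = 1.0
    return assignment


class MatchingLayer(torch.autograd.Function):
    LAMBDA = 10.0  # interpolation strength; an arbitrary hyperparameter here

    @staticmethod
    def forward(ctx, costs):
        assignment = solve_matching(costs)
        ctx.save_for_backward(costs, assignment)
        return assignment

    @staticmethod
    def backward(ctx, grad_output):
        costs, assignment = ctx.saved_tensors
        # Re-solve a perturbed instance; the difference between the two integer
        # solutions acts as a finite-difference surrogate gradient. It is zero
        # whenever the perturbation is too small to change the matching.
        perturbed = solve_matching(costs + MatchingLayer.LAMBDA * grad_output)
        return -(assignment - perturbed) / MatchingLayer.LAMBDA


# Usage: a network would predict the cost matrix; the loss compares the
# resulting matching to the (unordered) ground-truth assignment.
costs = torch.randn(5, 5, requires_grad=True)
assignment = MatchingLayer.apply(costs)
loss = ((assignment - torch.eye(5)) ** 2).sum()
loss.backward()          # gradients flow to the costs via the surrogate
print(costs.grad.shape)  # torch.Size([5, 5])
```

For the sequence-to-sequence experiments, the abstract mentions differentiable decoding with softmax or Gumbel-softmax; PyTorch ships this relaxation directly, so a short illustration suffices:

```python
# Gumbel-softmax relaxation: differentiable (approximate) sampling of discrete
# tokens, as used in differentiable encoder-decoder architectures.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 1000, requires_grad=True)   # batch of token logits
soft = F.gumbel_softmax(logits, tau=0.5)             # soft, differentiable samples
hard = F.gumbel_softmax(logits, tau=0.5, hard=True)  # one-hot, straight-through grads
```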

Related research

- SurCo: Learning Linear Surrogates For Combinatorial Nonlinear Optimization Problems (10/22/2022)
- Non-Differentiable Supervised Learning with Evolution Strategies and Hybrid Methods (06/07/2019)
- Learning Mixed-Integer Linear Programs from Contextual Examples (07/15/2021)
- Differentiable TAN Structure Learning for Bayesian Network Classifiers (08/21/2020)
- Learning with Combinatorial Optimization Layers: a Probabilistic Approach (07/27/2022)
- Neural Programmer: Inducing Latent Programs with Gradient Descent (11/16/2015)
- Amortized Synthesis of Constrained Configurations Using a Differentiable Surrogate (06/16/2021)
