Alternating Directions Dual Decomposition

12/28/2012
by André F. T. Martins, et al.

We propose AD3, a new algorithm for approximate maximum a posteriori (MAP) inference on factor graphs based on the alternating directions method of multipliers. Like dual decomposition algorithms, AD3 uses worker nodes to iteratively solve local subproblems and a controller node to combine these local solutions into a global update. The key characteristic of AD3 is that each local subproblem has a quadratic regularizer, leading to a faster consensus than subgradient-based dual decomposition, both theoretically and in practice. We provide closed-form solutions for these AD3 subproblems for binary pairwise factors and factors imposing first-order logic constraints. For arbitrary factors (large or combinatorial), we introduce an active set method which requires only an oracle for computing a local MAP configuration, making AD3 applicable to a wide range of problems. Experiments on synthetic and real-world problems show that AD3 compares favorably with the state-of-the-art.
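To make the worker/controller structure concrete, here is a minimal sketch of the underlying ADMM consensus pattern on a toy continuous problem: each "worker" holds a local quadratic objective and the "controller" enforces agreement on a shared variable. This is an illustration of the ADMM scheme the abstract describes, not the actual AD3 subproblems (which operate on factor-graph marginals); the objectives `f_i(x) = 0.5 * (x - a[i])**2` and all parameter values are assumptions chosen for clarity.

```python
# Illustrative ADMM consensus sketch (not the AD3 implementation itself).
# Workers minimize f_i(x) = 0.5 * (x - a[i])**2; the controller enforces
# agreement on a global variable z, mirroring the worker/controller split.

def admm_consensus(a, rho=1.0, iters=100):
    n = len(a)
    x = [0.0] * n   # local copies held by the workers
    u = [0.0] * n   # scaled dual variables (multipliers divided by rho)
    z = 0.0         # global consensus variable held by the controller
    for _ in range(iters):
        # Local step: each subproblem carries a quadratic regularizer
        # pulling x[i] toward the current consensus; here it is a
        # one-dimensional quadratic, so it is solved in closed form.
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        # Global step: the controller averages the local solutions.
        z = sum(x[i] + u[i] for i in range(n)) / n
        # Dual step: accumulate the disagreement residuals.
        u = [u[i] + x[i] - z for i in range(n)]
    return z

# For a sum of quadratics the consensus converges to the mean of a.
print(admm_consensus([1.0, 2.0, 6.0]))  # approaches 3.0
```

The quadratic term in the local step is exactly what distinguishes ADMM-style updates from subgradient-based dual decomposition, where each worker solves its subproblem without the proximal pull toward the consensus.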


