Computing Maximum Entropy Distributions Everywhere
We study the problem of computing the maximum entropy distribution with a specified expectation over a large discrete domain. Maximum entropy distributions arise naturally and have found numerous applications in economics, machine learning, and various sub-disciplines of mathematics and computer science. The key computational questions about maximum entropy distributions are whether they admit succinct descriptions and whether they can be computed efficiently. Here we answer both questions in the affirmative for very general domains and, importantly, with no restriction on the expectation. This completes the picture left open by prior work on this problem, which required the expectation vector to lie polynomially far inside the interior of the convex hull of the domain. As a consequence we obtain a general algorithmic tool and show how it can be applied to derive several old and new results in a unified manner. In particular, our results imply that certain recent continuous optimization formulations, for instance for discrete counting and optimization problems, the matrix scaling problem, and the worst-case Brascamp-Lieb constants in the rank-1 regime, are efficiently computable. Attaining these implications requires reformulating the underlying problem as a version of maximum entropy computation in which the optimization also involves the expectation vector, which therefore cannot be assumed to lie sufficiently deep in the interior. The key new technical ingredient in our work is a polynomial bound on the bit complexity of near-optimal dual solutions to the maximum entropy convex program. We obtain this bound via geometric reasoning that draws on convex analysis and polyhedral geometry, avoiding combinatorial arguments based on the specific structure of the domain. We also provide a lower bound on the bit complexity of near-optimal solutions, showing that our results are tight.
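For intuition, here is a minimal sketch, not the paper's algorithm, of the convex program in question: over a finite domain of vectors with target expectation theta, the max-entropy distribution is recovered from a dual vector y* minimizing log sum_{x in domain} exp(<y, x>) - <y, theta>, after which p_x is proportional to exp(<y*, x>). The explicit cube domain, the choice of theta, and the use of scipy's BFGS solver are illustrative assumptions; the paper's setting involves exponentially large domains accessed implicitly, and its contribution concerns bounding the bit complexity of y*.

```python
# Minimal sketch (assumed setup, not the paper's method): solve the dual of the
# max-entropy program over a tiny explicit domain and recover the primal.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Toy domain: the 8 vertices of the cube {0,1}^3.
domain = np.array([[(i >> k) & 1 for k in range(3)] for i in range(8)], dtype=float)
theta = np.array([0.5, 0.3, 0.7])  # target expectation, strictly inside the hull

def dual(y):
    # Dual objective: log sum_{x in domain} exp(<y, x>) - <y, theta>.
    return logsumexp(domain @ y) - y @ theta

y_star = minimize(dual, x0=np.zeros(3), method="BFGS").x

# Primal recovery: the max-entropy distribution satisfies p_x ∝ exp(<y*, x>).
logits = domain @ y_star
p = np.exp(logits - logsumexp(logits))

print("dual minimizer y*:", np.round(y_star, 4))
print("E_p[x]           :", np.round(p @ domain, 4))  # ≈ theta up to solver tolerance
```

In this toy example, pushing theta toward the boundary of the hull (say, toward the vertex (1, 1, 1)) makes the norm of y* blow up, which is exactly the bit-complexity phenomenon for near-optimal dual solutions that the abstract's bounds address.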