Some Inapproximability Results of MAP Inference and Exponentiated Determinantal Point Processes

09/02/2021
by Naoto Ohsaka

We study the computational complexity of two hard problems on determinantal point processes (DPPs). One is maximum a posteriori (MAP) inference, i.e., finding a principal submatrix having the maximum determinant. The other is probabilistic inference on exponentiated DPPs (E-DPPs), which can sharpen or weaken the diversity preference of DPPs via an exponent parameter p. We prove the following complexity-theoretic hardness results, which explain the difficulty of approximating MAP inference and the normalizing constant for E-DPPs:

1. Unconstrained MAP inference for an n × n matrix is NP-hard to approximate within a factor of 2^(βn), where β = 10^(-10^13). This result improves upon the (9/8 - ε)-factor inapproximability given by Kulesza and Taskar (2012).

2. Log-determinant maximization is NP-hard to approximate within a factor of 5/4 in the unconstrained case and within a factor of 1 + 10^(-10^13) in the size-constrained monotone case.

3. The normalizing constant for E-DPPs of any (fixed) constant exponent p ≥ β^(-1) = 10^(10^13) is NP-hard to approximate within a factor of 2^(βpn). This gives another negative answer to open questions posed by Kulesza and Taskar (2012) and Ohsaka and Matsuoka (2020).
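
To make the two quantities concrete, below is a minimal brute-force Python sketch (illustrative only, not from the paper). It enumerates all 2^n subsets to compute the unconstrained MAP solution max_S det(L_S) and the E-DPP normalizing constant Z_p = Σ_S det(L_S)^p. The function names and the small example kernel are assumptions for illustration; both routines take exponential time and are only feasible for tiny n, consistent with the hardness results above.

```python
import itertools
import numpy as np

def map_inference_bruteforce(L):
    """Unconstrained MAP inference: find the subset S maximizing det(L_S),
    where L_S is the principal submatrix of L indexed by S.
    Exhaustive search over all 2^n subsets (exponential time)."""
    n = L.shape[0]
    best_set, best_det = (), 1.0  # det of the empty principal submatrix is 1
    for r in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            d = np.linalg.det(L[np.ix_(S, S)])
            if d > best_det:
                best_set, best_det = S, d
    return best_set, best_det

def edpp_normalizing_constant(L, p):
    """Normalizing constant of an E-DPP with exponent p:
    Z_p = sum over all subsets S of det(L_S)^p."""
    n = L.shape[0]
    Z = 1.0  # contribution of the empty set: det of an empty submatrix is 1
    for r in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            Z += np.linalg.det(L[np.ix_(S, S)]) ** p
    return Z

# Tiny example: a random PSD kernel on 5 items.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
L = B @ B.T  # positive semidefinite by construction
print(map_inference_bruteforce(L))
print(edpp_normalizing_constant(L, p=2))
```

As a sanity check, for p = 1 the sum satisfies the well-known identity Σ_S det(L_S) = det(L + I); for general constant p no such closed form is known, which is what makes the hardness result for large p meaningful.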
