Private Learning and Regularized Optimal Transport

05/27/2019
by Etienne Boursier, et al.

Private data are valuable either by remaining private (for instance, if they are sensitive) or by being used publicly to increase some utility. These two objectives are antagonistic, and leaking data may be more rewarding than concealing them. Unlike classical notions of privacy, which focus only on the first objective, we instead consider agents that optimize a natural trade-off between the two. We formalize this as an optimization problem in which the objective is regularized by the amount of information the agent leaks into the system, measured as a divergence between the prior and the posterior on the private data. Quite surprisingly, when combined with entropic regularization, the Sinkhorn divergence naturally emerges in the optimization objective, making the problem efficiently solvable. We apply these techniques to preserve some privacy in online repeated auctions.
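Since the abstract notes that entropic regularization makes the objective efficiently solvable via the Sinkhorn divergence, a minimal sketch of entropic-regularized optimal transport computed by Sinkhorn iterations may help. This is a generic illustration of the Sinkhorn algorithm, not the paper's specific privacy objective; the marginals, cost matrix, and regularization strength below are illustrative assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    a, b : source and target probability vectors
    C    : cost matrix between support points
    eps  : entropic regularization strength (illustrative choice)
    """
    K = np.exp(-C / eps)                 # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # rescale columns to match marginal b
        u = a / (K @ v)                  # rescale rows to match marginal a
    P = u[:, None] * K * v[None, :]      # regularized transport plan
    return P, float(np.sum(P * C))       # plan and its transport cost

# Toy example: uniform marginals on 3 points with absolute-difference cost.
a = np.ones(3) / 3
b = np.ones(3) / 3
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
P, cost = sinkhorn(a, b, C)
```

After the iterations, the plan's row and column sums approximate the two marginals; the (debiased) Sinkhorn divergence used in the literature is then built from such regularized transport costs.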
