Data Disclosure with Non-zero Leakage and Non-invertible Leakage Matrix
We study a statistical signal processing privacy problem in which an agent observes useful data Y and wants to disclose the information to a user. Since the useful data is correlated with the private data X, the agent employs a privacy mechanism to generate data U that can be released. We study the privacy mechanism design that maximizes the revealed information about Y while satisfying a strong ℓ_1-privacy criterion. When a sufficiently small leakage is allowed, we show that the optimizer vectors of the privacy mechanism design problem have a specific geometry, i.e., they are perturbations of fixed distribution vectors. This geometrical structure allows us to use a local approximation of the conditional entropy. Using this approximation, the original optimization problem can be reduced to a linear program, so an approximate solution for the privacy mechanism can be obtained easily. The main contribution of this work is to consider non-zero leakage with a non-invertible leakage matrix. In an example inspired by watermarking applications, we first investigate the accuracy of the approximation. Then, we employ different measures for utility and privacy leakage to compare the privacy-utility trade-off of our approach with that of other methods. In particular, we show that by allowing a small leakage, our method achieves significant utility compared to the case where no leakage is allowed.
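The sketch below illustrates, under stated assumptions, how the perturbation geometry and the local approximation turn the design into a linear program. It assumes finite alphabets, the Markov chain X - Y - U, a single released symbol u, and a first-order approximation of H(Y|U=u) around P_Y; the toy joint distribution P_XY, the leakage level eps, and the linearization coefficients g are illustrative stand-ins, not the paper's exact derivation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy joint distribution P_XY over small finite alphabets
# (rows: private data X, columns: useful data Y); values are illustrative only.
P_XY = np.array([[0.20, 0.15, 0.10],
                 [0.10, 0.15, 0.30]])
P_X = P_XY.sum(axis=1)                 # marginal of the private data
P_Y = P_XY.sum(axis=0)                 # marginal of the useful data
M = P_XY / P_Y[None, :]                # leakage matrix P_{X|Y}; 2x3, hence non-invertible
nx, ny = M.shape
eps = 0.05                             # allowed l1-leakage level (assumption)

# Perturbation view: for a released symbol u, write P_{Y|U=u} = P_Y + eps * W with
# sum(W) = 0.  With the Markov chain X - Y - U, the induced private posterior is
# P_{X|U=u} = P_X + eps * (M @ W), so the l1-privacy criterion becomes ||M @ W||_1 <= 1.
# Linearizing H(Y|U=u) around P_Y makes the objective linear in W, giving an LP.
g = np.log(P_Y) + 1.0                  # stand-in linearization coefficients (assumption)

# LP variables: [W (ny entries), t (nx entries)], where t encodes |M @ W| elementwise.
c = np.concatenate([-g, np.zeros(nx)])            # linprog minimizes; negate to maximize
A_ub = np.vstack([
    np.hstack([ M, -np.eye(nx)]),                 #  M W - t <= 0
    np.hstack([-M, -np.eye(nx)]),                 # -M W - t <= 0
    np.hstack([np.zeros(ny), np.ones(nx)])[None], # sum(t) <= 1  (l1-privacy ball)
    np.hstack([-np.eye(ny), np.zeros((ny, nx))]), # keep P_Y + eps*W >= 0
])
b_ub = np.concatenate([np.zeros(2 * nx), [1.0], P_Y / eps])
A_eq = np.hstack([np.ones(ny), np.zeros(nx)])[None]   # sum(W) = 0 (valid distribution)
b_eq = np.array([0.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * (ny + nx))
W_star = res.x[:ny]
print("perturbation direction W*:", W_star)
print("released posterior P_{Y|U=u}:", P_Y + eps * W_star)
print("induced leakage ||P_{X|U=u} - P_X||_1:", eps * np.abs(M @ W_star).sum())
```

Loosely speaking, the non-invertible setting gives the design extra freedom: components of W lying in the null space of the leakage matrix M alter the released information about Y without changing the induced posterior on X. The full design would couple several such perturbation directions, one per released symbol, through the requirement that they reproduce the marginal P_Y; the sketch solves only the single-direction subproblem.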