Optimal Differentially Private Learning of Thresholds and Quasi-Concave Optimization

11/11/2022
by Edith Cohen et al.

The problem of learning threshold functions is a fundamental one in machine learning. Classical learning theory implies a sample complexity of O(ξ^-1 log(1/β)) (for generalization error ξ with confidence 1-β). The private version of the problem, however, is more challenging; in particular, the sample complexity must depend on the size |X| of the domain. Progress on quantifying this dependence, via lower and upper bounds, was made in a line of works over the past decade. In this paper, we finally close the gap for approximate-DP and provide a nearly tight upper bound of Õ(log^* |X|), which matches a lower bound by Alon et al. (that applies even to improper learning) and improves over the prior upper bound of Õ((log^* |X|)^1.5) by Kaplan et al. We also provide matching upper and lower bounds of Θ̃(2^(log^* |X|)) for the additive error of private quasi-concave optimization (a related and more general problem). Our improvement is achieved via the novel Reorder-Slice-Compute paradigm for private data analysis, which we believe will have further applications.
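As a side illustration of how slowly the log^* |X| term in these bounds grows, below is a minimal Python sketch of the iterated logarithm. The iterated_log function, its base-2 convention, and the example domain sizes are illustrative assumptions for this note, not anything taken from the paper.

```python
import math

def iterated_log(x, base=2.0):
    """Iterated logarithm log^*(x): the number of times the logarithm
    must be applied before the value drops to at most 1."""
    count = 0
    while x > 1:
        x = math.log(x, base)
        count += 1
    return count

# Illustrative domain sizes: log^* |X| grows extremely slowly,
# staying in the single digits even for astronomically large |X|.
for domain_size in (2**16, 2**256, 2**65536):
    print(iterated_log(domain_size))
```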

