Revisiting Model-Agnostic Private Learning: Faster Rates and Active Learning

11/06/2020
by Chong Liu, et al.

The Private Aggregation of Teacher Ensembles (PATE) framework is one of the most promising recent approaches to differentially private learning. Existing theoretical analysis shows that PATE consistently learns any VC class in the realizable setting, but falls short of explaining its success in the more general case where the error rate of the optimal classifier is bounded away from zero. We fill this gap by introducing the Tsybakov Noise Condition (TNC) and establishing stronger and more interpretable learning bounds. These bounds provide new insights into when PATE works and improve over existing results even in the narrower realizable setting. We also investigate the compelling idea of using active learning to save privacy budget. The novel components of the proofs include a more refined analysis of the majority-voting classifier, which may be of independent interest, and an observation that the synthetic "student" learning problem is nearly realizable by construction under the Tsybakov noise condition.
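To make the aggregation step concrete, here is a minimal sketch (not the authors' implementation) of PATE-style noisy majority voting: each teacher, trained on a disjoint shard of the private data, votes for a class on an unlabeled query, Laplace noise is added to the vote counts, and the noisy plurality label is released to train the student. The function name noisy_majority_vote, the noise parameter gamma, and the use of NumPy are illustrative assumptions.

import numpy as np

def noisy_majority_vote(teacher_votes, num_classes, gamma, rng=None):
    # teacher_votes: one predicted class index per teacher.
    if rng is None:
        rng = np.random.default_rng()
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=1.0 / gamma, size=num_classes)  # Laplace noise for privacy
    return int(np.argmax(counts))  # noisy plurality label released to the student

# Example: 250 teachers voting over 10 classes on a single student query.
votes = np.random.default_rng(0).integers(0, 10, size=250)
student_label = noisy_majority_vote(votes, num_classes=10, gamma=0.05)

For context, one standard statement of the Tsybakov noise condition (the paper's exact parametrization may differ) asks for constants C > 0 and alpha in [0, 1) such that P(0 < |eta(X) - 1/2| <= t) <= C * t^(alpha / (1 - alpha)) for all t > 0, where eta(x) = P(Y = 1 | X = x); larger exponents mean the regression function rarely hovers near the decision boundary, so teacher majority votes concentrate on the Bayes-optimal label.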
