PAC-Bayes with Backprop

08/19/2019
by Omar Rivasplata, et al.
We explore a method to train probabilistic neural networks by minimizing risk upper bounds, specifically, PAC-Bayes bounds. Thus randomization is not just part of a proof strategy, but part of the learning algorithm itself. We derive two training objectives, one from a previously known PAC-Bayes bound, and a second one from a novel PAC-Bayes bound. We evaluate both training objectives on various data sets and demonstrate the tightness of the risk upper bounds achieved by our method. Our training objectives have sound theoretical justification, and lead to self-bounding learning where all the available data may be used to learn a predictor and certify its risk, with no need to follow a data-splitting protocol.
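To make the idea concrete, the sketch below trains a small probabilistic linear classifier by gradient descent on a PAC-Bayes-style objective: a surrogate empirical loss plus a McAllester-type complexity term involving KL(Q||P). This is an illustrative approximation, not the paper's exact PBB objectives or bounds; the prior variance, confidence parameter delta, the normalized cross-entropy surrogate, and all other hyperparameters here are assumptions chosen for the example.

```python
# Minimal sketch (assumed setup, not the paper's exact objective): train a
# probabilistic linear layer by minimizing
#   empirical surrogate loss + sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n)),
# a McAllester-style PAC-Bayes upper bound, via backprop and the
# reparameterization trick.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbLinear(nn.Module):
    """Linear layer with a diagonal Gaussian posterior Q over weights."""
    def __init__(self, in_features, out_features, sigma_prior=0.1):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.mu_b = nn.Parameter(torch.zeros(out_features))
        self.rho_b = nn.Parameter(torch.full((out_features,), -3.0))
        self.sigma_prior = sigma_prior  # prior P = N(0, sigma_prior^2 I)

    def sigmas(self):
        return F.softplus(self.rho), F.softplus(self.rho_b)

    def forward(self, x):
        sw, sb = self.sigmas()
        w = self.mu + sw * torch.randn_like(sw)   # sample weights from Q
        b = self.mu_b + sb * torch.randn_like(sb)
        return x @ w.t() + b

    def kl(self):
        """KL(Q || P) for diagonal Gaussians against an isotropic Gaussian prior."""
        sw, sb = self.sigmas()
        def kl_term(mu, sigma):
            return (torch.log(self.sigma_prior / sigma)
                    + (sigma ** 2 + mu ** 2) / (2 * self.sigma_prior ** 2) - 0.5).sum()
        return kl_term(self.mu, sw) + kl_term(self.mu_b, sb)

def pac_bayes_objective(model, x, y, n, delta=0.05):
    """Surrogate empirical risk plus a McAllester-style complexity term."""
    logits = model(x)
    # Cross-entropy normalized by log(num_classes) as a rough surrogate;
    # the PAC-Bayes bound formally requires a loss bounded in [0, 1].
    emp_loss = F.cross_entropy(logits, y) / math.log(logits.shape[1])
    complexity = torch.sqrt((model.kl() + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
    return emp_loss + complexity

# Toy usage on synthetic data (all sizes illustrative).
torch.manual_seed(0)
n, d, classes = 1000, 20, 3
X, Y = torch.randn(n, d), torch.randint(0, classes, (n,))
model = ProbLinear(d, classes)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = pac_bayes_objective(model, X, Y, n)
    loss.backward()
    opt.step()
```

Because the complexity term is itself part of the minimized objective, the same quantity that drives training also yields a (here, approximate) certificate on the risk, which is the "self-bounding" aspect described above.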
