Boosted Density Estimation Remastered

03/22/2018
by Zac Cranko, et al.

There has recently been a steady increase in iterative approaches to boosted density estimation and sampling, usually proceeding by adding candidate "iterate" densities to a model that becomes more accurate with each iteration. The accompanying burst of formal convergence results has not yet changed a striking picture: all results essentially pay the price of heavy assumptions on the iterates, often unrealistic or hard to check, and stand in blatant contrast with the original boosting theory, where such assumptions would be the weakest possible. In this paper, we show that all that suffices to achieve boosting for density estimation is a weak learner in the original boosting-theory sense, that is, an oracle that supplies classifiers. We provide convergence rates that comply with boosting requirements, being better and/or relying on substantially weaker assumptions than the state of the art. One of our rates is, to our knowledge, the first to rely on assumptions that are not just weak but also empirically testable. We show that the fitted model belongs to an exponential family, and obtain in the course of our results a variational characterization of f-divergences better than f-GAN's. Experimental results on several simulated problems display significantly better results than AdaGAN during early boosting rounds, in particular for mode capture, while using architectures less than a fifth of AdaGAN's size.
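For readers skimming the abstract, the algorithmic idea can be made concrete: draw samples from the current model, ask a weak learner (a plain classifier) to tell them apart from the data, and fold that classifier multiplicatively into the model, so that after T rounds the fit is q_T(x) ∝ q_0(x) · exp(Σ_t α_t c_t(x)), an exponential family whose sufficient statistics are the classifiers. (For reference, the f-GAN characterization the paper claims to improve on is the variational lower bound D_f(P‖Q) ≥ sup_V E_{x∼P}[V(x)] − E_{x∼Q}[f*(V(x))], with f* the convex conjugate of f.) Below is a minimal, self-contained sketch of such a loop on a 1-D toy problem. It is illustrative only, not the authors' algorithm: the decision-stump oracle `fit_stump`, the edge-based step size, and the rejection sampler are all assumptions made so the example runs end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a two-mode target density the base model misses.
def sample_target(n):
    comp = rng.random(n) < 0.5
    return np.where(comp, rng.normal(-2.0, 0.5, n), rng.normal(2.0, 0.5, n))

# Base density q0: a standard normal (log-density up to a constant).
def log_q0(x):
    return -0.5 * x**2

def sample_q0(n):
    return rng.normal(0.0, 1.0, n)

# Weak learner oracle: a decision stump, i.e. an ordinary classifier
# that only needs to beat chance at telling data from model samples.
def fit_stump(pos, neg):
    xs = np.concatenate([pos, neg])
    best = (0.5, 0.0, 1)  # (accuracy, threshold, sign)
    for theta in np.quantile(xs, np.linspace(0.05, 0.95, 19)):
        for s in (1, -1):
            acc = 0.5 * ((s * (pos - theta) > 0).mean()
                         + (s * (neg - theta) <= 0).mean())
            if acc > best[0]:
                best = (acc, theta, s)
    acc, theta, s = best
    return lambda x: np.sign(s * (x - theta) + 1e-12), acc

# Boosted model: q_t(x) ∝ q0(x) * exp(sum_t alpha_t * c_t(x)).
classifiers, alphas = [], []

def log_q(x):
    out = log_q0(x)
    for c, a in zip(classifiers, alphas):
        out = out + a * c(x)
    return out

def sample_q(n):
    # Rejection sampling from q0 is valid here because the classifiers
    # take values in {-1, +1}, so exp(sum a_t c_t) <= exp(sum a_t).
    log_m = sum(alphas)
    out = []
    while len(out) < n:
        x = sample_q0(n)
        keep = np.log(rng.random(n)) < (log_q(x) - log_q0(x) - log_m)
        out.extend(x[keep])
    return np.array(out[:n])

data = sample_target(2000)
for t in range(10):
    fake = sample_q(2000)
    c, acc = fit_stump(data, fake)  # the weak-learner oracle call
    edge = acc - 0.5                # advantage over random guessing
    if edge <= 0.01:                # oracle no longer beats chance
        break
    alphas.append(edge)             # illustrative step size, not the paper's
    classifiers.append(c)
    print(f"round {t}: oracle accuracy {acc:.3f}")
```

Each round the printed oracle accuracy should drift toward 0.5 as the boosted model becomes harder to distinguish from the data; the weak-learning "edge" is exactly what the paper's convergence rates are stated in terms of.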
