Bregman divergence as general framework to estimate unnormalized statistical models

02/14/2012
by Michael Gutmann, et al.

We show that the Bregman divergence provides a rich framework to estimate unnormalized statistical models for continuous or discrete random variables, that is, models which do not integrate or sum to one, respectively. We prove that recent estimation methods such as noise-contrastive estimation, ratio matching, and score matching belong to the proposed framework, and explain their interconnection based on supervised learning. Further, we discuss the role of boosting in unsupervised learning.
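The abstract invokes the Bregman divergence without stating it; as a reminder, a minimal sketch of the standard definition follows. The symbols used here ($\Psi$, $p_d$, $\tilde p_\theta$, $\mu$) are illustrative notation chosen for this sketch and are not taken from the paper.

```latex
% Minimal sketch (illustrative notation, not the paper's).
% Pointwise Bregman divergence generated by a strictly convex,
% differentiable function \Psi:
\[
  d_\Psi(a, b) \;=\; \Psi(a) - \Psi(b) - \Psi'(b)\,(a - b) \;\ge\; 0,
  \qquad d_\Psi(a, b) = 0 \iff a = b.
\]
% Applying it pointwise to a data density p_d and an unnormalized
% model \tilde p_\theta, then integrating against a base measure \mu,
% is one common way to define a divergence between the two functions:
\[
  D_\Psi\bigl(p_d \,\|\, \tilde p_\theta\bigr)
  = \int d_\Psi\bigl(p_d(x), \tilde p_\theta(x)\bigr)\, \mathrm{d}\mu(x).
\]
% Different choices of \Psi yield different estimation criteria;
% for example, \Psi(u) = u \log u - u gives a generalized
% Kullback-Leibler divergence, d_\Psi(a, b) = a \log(a/b) - a + b,
% which remains well defined when \tilde p_\theta does not
% integrate to one.
```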

Related research:

12/07/2021  A Unified Framework for Multi-distribution Density Ratio Estimation
            Binary density ratio estimation (DRE), the problem of estimating the rat...

09/15/2021  Deep Bregman Divergence for Contrastive Learning of Visual Representations
            Deep Bregman divergence measures divergence of data points using neural ...

06/10/2018  Conditional Noise-Contrastive Estimation of Unnormalised Models
            Many parametric statistical models are not properly normalised and only ...

01/23/2019  Unified estimation framework for unnormalized models with statistical efficiency
            Parameter estimation of unnormalized models is a challenging problem bec...

10/16/2020  Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models
            The learning and evaluation of energy-based latent variable models (EBLV...

04/27/2023  Statistical learning for species distribution models in ecological studies
            We discuss species distribution models (SDM) for biodiversity studies in...

05/03/2014  Why (and When and How) Contrastive Divergence Works
            Contrastive divergence (CD) is a promising method of inference in high d...
