Stochastic Descent Analysis of Representation Learning Algorithms

12/18/2014
by Richard M. Golden, et al.

Although stochastic approximation learning methods have been widely used in the machine learning literature for over 50 years, formal theoretical analyses of specific machine learning algorithms are less common because stochastic approximation theorems typically rest on assumptions that are difficult to communicate and verify. This paper presents a new stochastic approximation theorem for state-dependent noise with easily verifiable assumptions that is applicable to the analysis and design of important deep learning algorithms, including adaptive learning, contrastive divergence learning, stochastic descent expectation maximization, and active learning.
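To make the general setting concrete, here is a minimal sketch of a classical Robbins-Monro stochastic approximation iteration, the kind of procedure such convergence theorems govern. It is not from the paper: the quadratic loss, the synthetic data model, the target parameters, and the 1/t step-size schedule are all illustrative assumptions. The schedule is chosen so that the step sizes sum to infinity while their squares sum to a finite value, the standard conditions these theorems typically impose; note the sketch draws i.i.d. samples, whereas the paper's theorem covers the harder state-dependent noise case.

```python
# Illustrative Robbins-Monro stochastic approximation sketch (not from the paper):
# theta_{t+1} = theta_t - a_t * g(theta_t, x_t), with a_t = 1/t so that
# sum(a_t) diverges and sum(a_t^2) converges.
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])   # hypothetical target parameters

def noisy_gradient(theta, x):
    # Stochastic gradient of a per-sample squared-error loss, where the
    # label y is generated from theta_true plus observation noise.
    y = x @ theta_true + rng.normal(scale=0.1)
    return 2.0 * (x @ theta - y) * x

theta = np.zeros(2)
for t in range(1, 10_001):
    a_t = 1.0 / t             # sum a_t = inf, sum a_t^2 < inf
    x = rng.normal(size=2)    # i.i.d. sample; state-dependent noise not modeled here
    theta -= a_t * noisy_gradient(theta, x)

print(theta)                  # approaches theta_true as t grows
```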


