Globally Consistent Algorithms for Mixture of Experts

02/21/2018
by Ashok Vardhan Makkuva, et al.

Mixture-of-Experts (MoE) is a popular neural network architecture and a basic building block of highly successful modern neural networks, for example, Gated Recurrent Units (GRUs) and attention networks. However, despite this empirical success, finding an efficient and provably consistent algorithm to learn the parameters has remained an open problem for more than two decades. In this paper, we introduce the first algorithm that learns the true parameters of an MoE model for a wide class of non-linearities with global consistency guarantees. Our algorithm relies on a novel combination of the EM algorithm and tensor-based method-of-moments techniques. We empirically validate our algorithm on both synthetic and real data sets in a variety of settings, and show superior performance to standard baselines.

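To make the setting concrete, below is a minimal sketch of the classical mixture-of-experts prediction rule the abstract refers to: a softmax gate chooses among experts, and each expert applies a non-linearity to a linear function of the input. The names (`gating_w`, `expert_a`, the choice of `tanh`) are illustrative assumptions, not the paper's exact notation or its learning algorithm.

```python
# Minimal sketch of a mixture-of-experts (MoE) predictor, assuming the
# classical formulation: y_hat = sum_i softmax(w_i^T x) * g(a_i^T x).
# Parameter names and the non-linearity g are illustrative, not the paper's.
import numpy as np

def moe_predict(x, gating_w, expert_a, g=np.tanh):
    """Gate with a softmax over gating scores, then combine expert outputs."""
    logits = gating_w @ x                  # one gating score per expert
    gate = np.exp(logits - logits.max())
    gate /= gate.sum()                     # softmax gating probabilities
    expert_out = g(expert_a @ x)           # each expert's non-linear response
    return gate @ expert_out               # gated combination of experts

# Example: 2 experts, 3-dimensional input
rng = np.random.default_rng(0)
gating_w = rng.standard_normal((2, 3))     # gating parameters (hypothetical)
expert_a = rng.standard_normal((2, 3))     # expert parameters (hypothetical)
x = rng.standard_normal(3)
print(moe_predict(x, gating_w, expert_a))
```

Learning both the gating and expert parameters of this model from data is the problem the paper addresses; the code above only illustrates the forward model, not the proposed EM-plus-method-of-moments estimator.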