From Two-Class Linear Discriminant Analysis to Interpretable Multilayer Perceptron Design

09/09/2020
by Ruiyuan Lin et al.

A closed-form solution exists in two-class linear discriminant analysis (LDA), which discriminates two Gaussian-distributed classes in a multi-dimensional feature space. In this work, we interpret the multilayer perceptron (MLP) as a generalization of a two-class LDA system so that it can handle an input composed of multiple Gaussian modalities belonging to multiple classes. Besides the input layer l_in and the output layer l_out, the MLP of interest consists of two intermediate layers, l_1 and l_2. We propose a feedforward design that has three stages: 1) from l_in to l_1, half-space partitioning accomplished by multiple parallel LDAs; 2) from l_1 to l_2, subspace isolation, where each Gaussian modality is represented by one neuron; 3) from l_2 to l_out, class-wise subspace mergence, where each Gaussian modality is connected to its target class. Through this process, we present an automatic MLP design that specifies the network architecture (i.e., the number of layers and the number of neurons per layer) and all filter weights in a feedforward one-pass fashion. The design generalizes to arbitrary input distributions by leveraging the Gaussian mixture model (GMM). Experiments are conducted to compare the performance of the traditional backpropagation-based MLP (BP-MLP) and the new feedforward MLP (FF-MLP).
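For context, the closed-form two-class LDA solution that the design builds on admits a compact implementation. The sketch below is illustrative only and is not the authors' code: the function name two_class_lda, the equal-prior assumption, and the pooled-covariance estimate are our own choices for the demonstration.

```python
import numpy as np

def two_class_lda(X0, X1):
    # Closed-form two-class LDA under equal priors and a shared
    # (pooled) covariance: the decision boundary is w @ x + b = 0,
    # with w = S^{-1} (mu1 - mu0) and the hyperplane passing through
    # the midpoint of the two class means.
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    n0, n1 = len(X0), len(X1)
    S = ((n0 - 1) * np.cov(X0, rowvar=False)
         + (n1 - 1) * np.cov(X1, rowvar=False)) / (n0 + n1 - 2)
    w = np.linalg.solve(S, mu1 - mu0)   # hyperplane normal
    b = -0.5 * w @ (mu0 + mu1)          # bias: boundary through the midpoint
    return w, b

# Toy check on two Gaussian blobs in 2-D.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=[-2.0, 0.0], scale=1.0, size=(500, 2))
X1 = rng.normal(loc=[+2.0, 0.0], scale=1.0, size=(500, 2))
w, b = two_class_lda(X0, X1)
labels = np.r_[np.zeros(500, dtype=int), np.ones(500, dtype=int)]
pred = (np.vstack([X0, X1]) @ w + b > 0).astype(int)
print("accuracy:", (pred == labels).mean())
```

Each such (w, b) pair can be read as one neuron of l_1, so stage 1 of the feedforward design amounts to stacking several of these LDA hyperplanes in parallel to partition the feature space into half-spaces.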
