Explicit Computation of Input Weights in Extreme Learning Machines

06/11/2014
by Jonathan Tapson, et al.

We present a closed-form expression for initializing the input weights in a multilayer perceptron, which can be used as the first step in the synthesis of an Extreme Learning Machine. The expression is based on the standard form of a separating hyperplane as computed in multilayer perceptrons and linear Support Vector Machines; that is, a linear combination of input data samples. In the absence of supervised training for the input weights, random linear combinations of training data samples are used to project the input data to a higher-dimensional hidden layer. The hidden layer weights are solved in the standard ELM fashion by computing the pseudoinverse of the hidden layer outputs and multiplying by the desired output values. All weights for this method can be computed in a single pass, and the resulting networks are more accurate and more consistent on some standard problems than regular ELM networks of the same size.
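A minimal NumPy sketch of the pipeline the abstract describes, under stated assumptions: the function names (elm_fit, elm_predict), the sigmoid hidden-layer nonlinearity, and the omission of bias terms are illustrative choices, not details taken from the paper. Each hidden unit's input weight vector is formed as a random linear combination of training samples, mirroring the separating-hyperplane form w = Σᵢ αᵢ xᵢ, and the output weights are then solved in one pass via the pseudoinverse of the hidden-layer activations.

```python
import numpy as np

def elm_fit(X, Y, n_hidden, rng=None):
    """Fit an ELM whose input weights are random linear combinations
    of the training samples (a sketch, not the authors' reference code).

    X: (n_samples, n_features) training inputs
    Y: (n_samples, n_outputs) desired outputs
    """
    rng = np.random.default_rng(rng)
    n_samples, _ = X.shape

    # Random combination coefficients: column k holds the coefficients
    # c_ik so that hidden unit k has weight vector W[:, k] = sum_i c_ik x_i.
    C = rng.standard_normal((n_samples, n_hidden))
    W = X.T @ C                                # (n_features, n_hidden)

    # Project inputs to the higher-dimensional hidden layer
    # (sigmoid nonlinearity assumed here).
    H = 1.0 / (1.0 + np.exp(-(X @ W)))         # (n_samples, n_hidden)

    # Standard ELM solve: pseudoinverse of the hidden-layer outputs
    # multiplied by the desired output values.
    beta = np.linalg.pinv(H) @ Y               # (n_hidden, n_outputs)
    return W, beta

def elm_predict(X, W, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W)))
    return H @ beta

# Usage on synthetic data:
X = np.random.default_rng(0).standard_normal((200, 10))
Y = np.sin(X.sum(axis=1, keepdims=True))
W, beta = elm_fit(X, Y, n_hidden=50, rng=0)
Y_hat = elm_predict(X, W, beta)
```

Because both the input weights and the output weights are computed directly (no iterative optimization), the whole network is synthesized in a single pass, as the abstract states.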


