A single hidden layer feedforward network with only one neuron in the hidden layer can approximate any univariate function

12/31/2015
by Namig J. Guliyev, et al.

The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single-hidden-layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in the hidden layer is permitted. In this paper, we consider constructive approximation on any finite interval of ℝ by neural networks with only one neuron in the hidden layer. We algorithmically construct a smooth, sigmoidal, almost monotone activation function σ providing approximation to an arbitrary continuous function to within any degree of accuracy. This algorithm is implemented in a computer program, which computes the value of σ at any reasonable point of the real axis.
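The network studied in the abstract has the simplest possible architecture: one hidden neuron, so the whole model is N(x) = c₁·σ(wx + θ) + c₀ with just four parameters. A minimal sketch of that form is below; note that the ordinary logistic sigmoid used here is only a placeholder, since the paper's result depends on a specially constructed activation σ, not on a standard sigmoid (the function `sigma` and the parameter names are illustrative assumptions, not the authors' construction).

```python
import math


def sigma(t):
    # Placeholder logistic sigmoid. The paper instead constructs a
    # special smooth, almost monotone sigmoidal activation; this
    # stand-in does NOT have the paper's universal-approximation
    # property for a single hidden neuron.
    return 1.0 / (1.0 + math.exp(-t))


def one_neuron_net(x, c1, w, theta, c0):
    # Single-hidden-layer network with exactly one hidden neuron:
    #   N(x) = c1 * sigma(w*x + theta) + c0
    return c1 * sigma(w * x + theta) + c0
```

With a standard sigmoid, such a network can only realize shifted and scaled sigmoid curves; the paper's contribution is precisely that a cleverly built σ makes this same four-parameter form dense in C[a, b].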


