Discrete-time signatures and randomness in reservoir computing

by Christa Cuchiero, et al.

A new explanation of the geometric nature of the reservoir computing phenomenon is presented. Reservoir computing is understood in the literature as the possibility of approximating input/output systems with randomly chosen recurrent neural systems and a trained linear readout layer. Light is shed on this phenomenon by constructing what are called strongly universal reservoir systems as random projections of a family of state-space systems that generate Volterra series expansions. This procedure yields a state-affine reservoir system with randomly generated coefficients whose dimension is logarithmically reduced with respect to that of the original system. This reservoir system is able to approximate any element in the class of fading-memory filters just by training a different linear readout for each filter. Explicit expressions for the probability distributions needed to generate the projected reservoir system are stated, and bounds on the resulting approximation error are provided.
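The paradigm described above, namely a randomly generated recurrent system whose only trained component is a linear readout, can be illustrated with a toy sketch. The following is not the paper's projected state-affine construction but a minimal echo-state-style example, assuming a random `tanh` reservoir and a short moving average as the fading-memory target filter; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly generated reservoir (never trained): x_{t+1} = tanh(A x_t + c z_t)
N = 100                                         # reservoir dimension
A = rng.normal(size=(N, N))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))       # scale spectral radius below 1
c = rng.normal(size=N)                          # random input weights

T = 2000
z = rng.uniform(-1.0, 1.0, size=T)              # scalar input signal

X = np.zeros((T, N))                            # collected reservoir states
x = np.zeros(N)
for t in range(T):
    x = np.tanh(A @ x + c * z[t])
    X[t] = x

# Target: a causal fading-memory filter of the input (5-tap moving average)
y = np.convolve(z, np.ones(5) / 5, mode="full")[:T]

# Train only the linear readout, via ridge regression after a washout period
washout = 100
Xw, yw = X[washout:], y[washout:]
w = np.linalg.solve(Xw.T @ Xw + 1e-6 * np.eye(N), Xw.T @ yw)

y_hat = X @ w
mse = np.mean((y_hat[washout:] - y[washout:]) ** 2)
```

Different fading-memory targets can be approximated by retraining only `w` on the same frozen reservoir states `X`, which is the defining feature of the reservoir computing approach.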


Infinite-dimensional reservoir computing

Reservoir computing approximation and generalization bounds are proved f...

Simple Cycle Reservoirs are Universal

Reservoir computation models form a subclass of recurrent neural network...

Risk bounds for reservoir computing

We analyze the practices of reservoir computing in the framework of stat...

Learning strange attractors with reservoir systems

This paper shows that the celebrated Embedding Theorem of Takens is a pa...

Reservoir kernels and Volterra series

A universal kernel is constructed whose sections approximate any causal ...

Echo state networks are universal

This paper shows that echo state networks are universal uniform approxim...
