Supersymmetric Artificial Neural Network

08/11/2020
The โ€œSupersymmetric Artificial Neural Networkโ€ hypothesis (with the novel-denotation ๐œ™(๐‘ฅ; ๐œƒ, ๐œƒ) โŠค๐‘ค) seeks to explore a regime with the potential to fundamentally use supersymmetric methods to construct artificial neural networks, therein seeking to engender novel, non-trivial contributions to deep or hierarchical artificial learning. Looking at the progression of โ€˜solution geometriesโ€™; going from ๐‘†๐‘‚(๐‘›) representation (such as Perceptron like models) to ๐‘†๐‘ˆ(๐‘›) representation (such as UnitaryRNNs) has guaranteed richer and richer representations in weight space of the artificial neural network, and hence better and better hypotheses were generatable. The Supersymmetric Artificial Neural Network hypothesis explores a natural step forward, namely ๐‘†๐‘ˆ(๐‘š|๐‘›) representation. These supersymmetric biological brain representations (Perez et al.) can be represented by supercharge compatible special unitary notation ๐‘†๐‘ˆ(๐‘š|๐‘›), or ๐œ™(๐‘ฅ; ๐œƒ, ๐œƒ) โŠค๐‘ค parameterized by ๐œƒ, ๐œƒ, which are supersymmetric directions, unlike ๐œƒ seen in the typical non-supersymmetric deep learning model. Notably, Supersymmetric values can encode or represent more information than the typical deep learning model, in terms of โ€œpartner potentialโ€ signals for example. This paper does not contain empirical code concerning supersymmetric artificial neural networks, although it does highlight empirical evidence, that indicates how such types of supersymmetric learning models could exceed the state of the art, due to preservation features seen in progressing through earlier related models from the days of older perceptron like models that were not supersymmetric.
