Supersymmetric Artificial Neural Network
The "Supersymmetric Artificial Neural Network" hypothesis (with the novel denotation φ(x; θ, θ̄)ᵀw) explores a regime in which supersymmetric methods are used to construct artificial neural networks, seeking non-trivial contributions to deep or hierarchical learning. In the progression of "solution geometries", moving from SO(n) representations (Perceptron-like models) to SU(n) representations (such as UnitaryRNNs) has yielded richer and richer representations in the weight space of the artificial neural network, and hence better and better generatable hypotheses. The Supersymmetric Artificial Neural Network hypothesis explores a natural next step: SU(m|n) representation. These supersymmetric biological brain representations (Perez et al.) can be expressed in supercharge-compatible special unitary notation SU(m|n), or φ(x; θ, θ̄)ᵀw parameterized by θ and θ̄, which are supersymmetric directions, unlike the θ seen in typical non-supersymmetric deep learning models. Notably, supersymmetric values can encode or represent more information than those of typical deep learning models, for example in terms of "partner potential" signals. This paper does not contain empirical code for supersymmetric artificial neural networks, but it highlights empirical evidence indicating how such supersymmetric learning models could exceed the state of the art, owing to the representational gains preserved through the progression from earlier, non-supersymmetric perceptron-like models.
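To make the intermediate SU(n) step in the progression above concrete, here is a minimal sketch (not from the paper; the function name `random_unitary` and the use of NumPy/SciPy are my own assumptions) of the unitary weight parameterization used by UnitaryRNN-style models: exponentiating a skew-Hermitian matrix yields a unitary matrix, whose norm-preserving action is what makes the SU(n) representation richer and more stable than earlier perceptron-like weight spaces.

```python
import numpy as np
from scipy.linalg import expm

def random_unitary(n, seed=0):
    """Sample a unitary weight matrix, as in UnitaryRNN-style SU(n) models.

    Any skew-Hermitian matrix A (A^H = -A) exponentiates to a unitary
    matrix U = exp(A), so optimizing over A keeps weights on the unitary
    manifold by construction.
    """
    rng = np.random.default_rng(seed)
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    a = m - m.conj().T       # skew-Hermitian generator
    return expm(a)           # lies in U(n)

U = random_unitary(4)
x = np.ones(4)
# U^H U = I: unitarity, so repeated application of U neither
# explodes nor vanishes the signal norm.
print(np.allclose(U.conj().T @ U, np.eye(4)))
print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))
```

The supersymmetric SU(m|n) step proposed by the hypothesis would extend this parameterization with the Grassmann-valued directions θ, θ̄, for which no empirical code is given in the paper.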