Intertwiners between Induced Representations (with Applications to the Theory of Equivariant Neural Networks)

by Taco S. Cohen et al.

Group equivariant and steerable convolutional neural networks (regular and steerable G-CNNs) have recently emerged as a very effective model class for learning from signals such as 2D and 3D images, video, and other data where symmetries are present. In geometrical terms, regular G-CNNs represent data in terms of scalar fields ("feature channels"), whereas steerable G-CNNs can also use vector or tensor fields ("capsules") to represent data. In algebraic terms, the feature spaces in regular G-CNNs transform according to a regular representation of the group G, whereas the feature spaces in steerable G-CNNs transform according to the more general induced representations of G. In order to make the network equivariant, each layer in a G-CNN is required to be an intertwiner between the induced representations associated with its input and output space. In this paper we present a general mathematical framework for G-CNNs on homogeneous spaces such as Euclidean space or the sphere. We show, using elementary methods, that the layers of an equivariant network are convolutional if and only if the input and output feature spaces transform according to an induced representation. This result, which follows from G.W. Mackey's abstract theory of induced representations, establishes G-CNNs as a universal class of equivariant network architectures, and generalizes the important recent work of Kondor & Trivedi on the intertwiners between regular representations.
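To make the equivariance property concrete, here is a minimal sketch for the simplest case the abstract alludes to: a lifting layer for the cyclic rotation group C4 acting on 2D images, whose output transforms according to the regular representation of C4 (rotating the input rotates each feature plane and cyclically permutes the rotation channels). This is an illustrative toy, not the paper's general construction on homogeneous spaces; the helper names `corr2d` and `lift_c4` are hypothetical.

```python
import numpy as np

def corr2d(x, k):
    """Plain 'valid'-mode cross-correlation of a 2D image with a 2D filter."""
    H, W = x.shape
    h, w = k.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def lift_c4(x, k):
    """Lifting correlation: one output channel per rotation r in C4.

    The stacked output transforms under the regular representation of C4.
    """
    return np.stack([corr2d(x, np.rot90(k, r)) for r in range(4)])

# Equivariance check: rotating the input by 90 degrees rotates each
# feature plane and cyclically shifts the rotation index, i.e. the
# layer is an intertwiner for this group action.
rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))
k = rng.standard_normal((3, 3))

y = lift_c4(x, k)
y_rot = lift_c4(np.rot90(x), k)
for r in range(4):
    assert np.allclose(y_rot[r], np.rot90(y[(r - 1) % 4]))
```

The assertion is the C4 analogue of the intertwining condition: applying the group action to the input and then the layer gives the same result as applying the layer and then the (regular-representation) action on the output.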


A General Theory of Equivariant CNNs on Homogeneous Spaces

Group equivariant convolutional neural networks (G-CNNs) have recently e...

Steerable CNNs

It has long been recognized that the invariance and equivariance propert...

General E(2)-Equivariant Steerable CNNs

The big empirical success of group equivariant networks has led in recen...

Theoretical Aspects of Group Equivariant Neural Networks

Group equivariant neural networks have been explored in the past few yea...

Geometric Deep Learning and Equivariant Neural Networks

We survey the mathematical foundations of geometric deep learning, focus...

Non-Euclidean Universal Approximation

Modifications to a neural network's input and output layers are often re...

Unified Fourier-based Kernel and Nonlinearity Design for Equivariant Networks on Homogeneous Spaces

We introduce a unified framework for group equivariant networks on homog...
