The Brownian motion in the transformer model

07/12/2021
by Yingshi Chen, et al.

The Transformer is the state-of-the-art model for many language and visual tasks. In this paper, we give a deep analysis of its multi-head self-attention (MHSA) module and find that: 1) each token is a random variable in a high-dimensional feature space; 2) after layer normalization, these variables are mapped to points on a hyper-sphere; 3) the update of these tokens is a Brownian motion. Brownian motion has special properties: its second-order term should not be ignored. So we present a new second-order optimizer (an iterative K-FAC algorithm) for the MHSA module. In short: all tokens are mapped to a high-dimensional hyper-sphere. The scaled dot-product attention softmax(𝐐𝐊^T/√(d)) is just the Markov transition matrix for a random walk on the sphere. The deep learning process then learns a proper kernel function to place these tokens at proper positions. The training process of the MHSA module corresponds to a Brownian motion worthy of further study.
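Two of the abstract's claims can be checked numerically: layer normalization (without the affine rescaling) places every token on a hyper-sphere of radius √d, and the softmax attention matrix is row-stochastic, i.e. a valid Markov transition matrix. A minimal NumPy sketch, using random toy tokens and identity Q/K projections as an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 10                      # feature dimension, number of tokens
X = rng.normal(size=(n, d))        # toy token embeddings

# Layer normalization (no affine): zero mean, unit variance per token.
Xn = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Every normalized token has norm sqrt(d): a point on a hyper-sphere.
norms = np.linalg.norm(Xn, axis=1)
print(np.allclose(norms, np.sqrt(d)))          # True

# Scaled dot-product attention weights (identity Q/K projections here).
Q, K = Xn, Xn
S = Q @ K.T / np.sqrt(d)
A = np.exp(S - S.max(axis=1, keepdims=True))   # numerically stable softmax
A = A / A.sum(axis=1, keepdims=True)

# Rows are non-negative and sum to 1: a Markov transition matrix
# for a random walk over the n tokens on the sphere.
print(np.allclose(A.sum(axis=1), 1.0))         # True
```

This only verifies the geometric setup; the paper's Brownian-motion analysis concerns how these positions evolve across layers and training steps.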


