MoNet: Moments Embedding Network
Bilinear pooling has recently been proposed as a feature encoding layer that can be used after the convolutional layers of a deep network to improve performance in multiple vision tasks. Unlike conventional global average pooling or fully connected layers, bilinear pooling gathers second-order information in a translation-invariant fashion. However, a serious drawback of this family of pooling layers is their dimensionality explosion. Approximate compact pooling methods have been explored to resolve this weakness. Additionally, recent results have shown that significant performance gains can be achieved by using matrix normalization to regularize unstable higher-order information. However, combining compact pooling with matrix normalization had not been explored until now. In this paper, we unify the bilinear pooling layer and the global Gaussian embedding layer through the empirical moment matrix. In addition, with a proposed novel sub-matrix square-root layer, one can normalize the output of the convolution layer directly and mitigate the dimensionality problem with off-the-shelf compact pooling methods. Our experiments on three widely used fine-grained classification datasets illustrate that our proposed architecture, MoNet, can achieve similar or better performance than G2DeNet. When combined with a compact pooling technique, it obtains comparable performance with encoded features that are 96% smaller.
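To make the two operations in the abstract concrete, here is a minimal NumPy sketch of generic second-order (bilinear) pooling followed by matrix square-root normalization. This is an illustration of the standard technique only, not the paper's MoNet layer: the array shapes, the eigendecomposition-based square root, and the final signed flattening with L2 normalization are all assumptions for the sake of the example.

```python
import numpy as np

def bilinear_pool(X):
    # X: (N, D) local convolutional features at N spatial locations.
    # Second-order (bilinear) pooling: averaged outer product -> (D, D).
    # The result is symmetric positive semi-definite by construction.
    return X.T @ X / X.shape[0]

def matrix_sqrt(A):
    # Square root of a symmetric PSD matrix via eigendecomposition,
    # one common way to implement matrix (square-root) normalization.
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (V * np.sqrt(w)) @ V.T

rng = np.random.default_rng(0)
X = rng.standard_normal((49, 8))      # e.g. a 7x7 spatial grid, 8 channels
G = bilinear_pool(X)                  # (8, 8) second-order statistic
G_norm = matrix_sqrt(G)               # matrix-square-root normalization
v = G_norm.reshape(-1)                # flatten to a descriptor vector
v = v / (np.linalg.norm(v) + 1e-12)   # L2 normalization
```

Note the D x D output: even for moderate channel counts D, the flattened descriptor has D^2 entries, which is the "dimensionality explosion" that compact pooling methods are designed to avoid.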