Hierarchical Joint Scene Coordinate Classification and Regression for Visual Localization
Visual localization is pivotal to many applications in computer vision and robotics. To address single-image RGB localization, state-of-the-art feature-based methods solve the task by matching local descriptors between a query image and a pre-built 3D model. Recently, deep neural networks have been exploited to directly learn the mapping from raw pixels to 3D coordinates in the scene, so that the matching is implicitly performed by the forward pass through the network. In this work, we present a new hierarchical joint classification-regression network that predicts pixel scene coordinates in a coarse-to-fine manner from a single RGB image. The network consists of a series of output layers, each conditioned on the outputs of the previous ones: the final output layer regresses the coordinates, while the earlier layers produce coarse location labels. Our experiments show that the proposed method outperforms the vanilla scene coordinate regression network and scales better to large environments. With data augmentation, it achieves state-of-the-art single-image RGB localization performance on three benchmark datasets.
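To make the coarse-to-fine conditioning concrete, the following is a minimal sketch of such a hierarchical head, not the authors' implementation: the class name, label counts, and channel sizes are illustrative assumptions, and any fully convolutional backbone is assumed to supply the per-pixel features.

```python
# Minimal sketch (assumed, not the paper's code) of a hierarchical
# joint classification-regression head: a coarse classifier, a finer
# classifier conditioned on it, and a coordinate regressor conditioned on both.
import torch
import torch.nn as nn

class HierarchicalSceneCoordHead(nn.Module):
    def __init__(self, feat_ch=512, coarse_labels=25, fine_labels=625):
        super().__init__()
        # Coarse classifier: per-pixel logits over coarse region labels.
        self.coarse_cls = nn.Conv2d(feat_ch, coarse_labels, 1)
        # Finer classifier, conditioned on the coarse predictions.
        self.fine_cls = nn.Conv2d(feat_ch + coarse_labels, fine_labels, 1)
        # Final regressor, conditioned on both label predictions:
        # outputs a 3D scene coordinate (x, y, z) per pixel.
        self.coord_reg = nn.Conv2d(feat_ch + coarse_labels + fine_labels, 3, 1)

    def forward(self, feats):
        coarse = self.coarse_cls(feats)                      # coarse location labels
        fine = self.fine_cls(torch.cat([feats, coarse], 1))  # finer labels, given coarse
        coords = self.coord_reg(torch.cat([feats, coarse, fine], 1))
        return coarse, fine, coords

# Usage: feats would come from a convolutional backbone applied to the RGB image.
feats = torch.randn(1, 512, 60, 80)
coarse, fine, coords = HierarchicalSceneCoordHead()(feats)
```

The predicted scene coordinates can then be fed to a standard PnP-with-RANSAC solver to recover the camera pose, as in other scene coordinate regression pipelines.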