Person Re-identification in the 3D Space
People live in a 3D world. However, existing works on person re-identification (re-id) mostly consider representation learning in a 2D space, intrinsically limiting the understanding of people. In this work, we address this limitation by exploring the prior knowledge of the 3D body structure. Specifically, we project 2D images to a 3D space and introduce a novel Omni-scale Graph Network (OG-Net) to learn the representation from sparse 3D points. With the help of 3D geometry information, we can learn a new type of deep re-id feature that is free from noisy variations such as scale and viewpoint. To our knowledge, this work is among the first attempts to conduct person re-identification in the 3D space. Extensive experiments show that the proposed method achieves competitive results on three popular large-scale person re-id datasets and generalizes well to unseen datasets.
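The abstract describes learning re-id features directly from sparse 3D body points obtained by projecting 2D images into 3D space. As an illustration only (not the authors' OG-Net architecture), the following is a minimal sketch of a graph convolution over sparse 3D points, where each point carries its coordinates plus an appearance feature; all module and function names here are hypothetical.

```python
# Hypothetical sketch (not the authors' OG-Net): a minimal edge-style
# graph convolution over sparse 3D body points, where each point carries
# its (x, y, z) coordinates plus a per-point appearance feature.
import torch
import torch.nn as nn


def knn_graph(points, k):
    """Indices of the k nearest neighbours for each 3D point.

    points: (B, N, 3) tensor of point coordinates.
    returns: (B, N, k) neighbour indices (self excluded).
    """
    dist = torch.cdist(points, points)                        # (B, N, N) pairwise distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]   # drop the point itself


class EdgeConv(nn.Module):
    """Aggregate neighbour features relative to each centre point."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim, out_dim),
            nn.ReLU(inplace=True),
        )

    def forward(self, feats, idx):
        B, N, C = feats.shape
        k = idx.shape[-1]
        # Gather neighbour features: (B, N, k, C)
        nbr = torch.gather(
            feats.unsqueeze(1).expand(B, N, N, C),
            2,
            idx.unsqueeze(-1).expand(B, N, k, C),
        )
        centre = feats.unsqueeze(2).expand_as(nbr)
        edge = torch.cat([centre, nbr - centre], dim=-1)       # local + relative features
        return self.mlp(edge).max(dim=2).values                # max-pool over neighbours


if __name__ == "__main__":
    B, N = 2, 512                                  # two people, 512 points each
    xyz = torch.rand(B, N, 3)                      # 3D coordinates from 2D-to-3D projection
    rgb = torch.rand(B, N, 3)                      # per-point appearance
    feats = torch.cat([xyz, rgb], dim=-1)          # (B, N, 6)
    conv = EdgeConv(in_dim=6, out_dim=64)
    out = conv(feats, knn_graph(xyz, k=8))         # (B, N, 64) point-wise descriptors
    print(out.shape)
```

In a full re-id pipeline, such point-wise descriptors would typically be pooled into a single person-level embedding and trained with identity supervision; the exact layer design and pooling used by OG-Net are described in the full paper.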