Unsupervised Concatenation Hashing with Sparse Constraint for Cross-Modal Retrieval

03/26/2019
by Jun Yu, et al.

With the advantages of low storage cost and high efficiency, hashing has received much attention in the retrieval field. Since data from multiple modalities describing a common object are semantically complementary, many works focus on learning unified binary codes. However, these works ignore the importance of the manifold structure among data. In fact, directly preserving the local manifold structure among samples in the Hamming space remains an interesting problem. Since different modalities are heterogeneous, we adopt the concatenation of the features from multiple modalities to represent the original object. In our framework, Locally Linear Embedding and Locality Preserving Projection are introduced to reconstruct the manifold structure of the original space in the Hamming space. In addition, an L2,1-norm regularization is imposed on the projection matrices to further exploit discriminative features for the different modalities simultaneously. Extensive experiments are performed to evaluate the proposed method, dubbed Unsupervised Concatenation Hashing (UCH), on three publicly available datasets, and the experimental results show that UCH outperforms most state-of-the-art unsupervised hashing models.
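To make the ingredients named in the abstract concrete, below is a minimal illustrative sketch (not the authors' implementation) of concatenating multi-modal features, preserving local manifold structure with an LPP-style graph Laplacian, and learning a projection with an L2,1-norm sparsity penalty before binarizing into Hamming codes. The function name, the gradient-descent solver, and all hyper-parameters are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def lpp_hashing_sketch(X_img, X_txt, code_len=32, k=10, lam=1.0, lr=1e-3, iters=200):
    # Concatenate the features of both modalities to represent each object.
    X = np.hstack([X_img, X_txt])               # shape (n, d_img + d_txt)
    n, d = X.shape

    # LPP-style k-NN affinity graph and graph Laplacian in the concatenated space.
    W = kneighbors_graph(X, k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T).toarray()                # symmetrize
    D = np.diag(W.sum(axis=1))
    L = D - W                                    # graph Laplacian

    # Projection matrix mapping concatenated features to the code space.
    P = np.random.randn(d, code_len) * 0.01

    for _ in range(iters):
        Y = X @ P                                # real-valued embeddings
        # Gradient of the locality-preserving term tr(Y^T L Y) w.r.t. P.
        grad = 2 * X.T @ (L @ Y)
        # Subgradient of the L2,1 norm: each row of P scaled by 1 / ||row||.
        row_norms = np.linalg.norm(P, axis=1, keepdims=True) + 1e-8
        grad += lam * P / row_norms
        P -= lr * grad

    # Binarize the projections to obtain Hamming-space codes.
    B = np.sign(X @ P)
    B[B == 0] = 1
    return B, P
```

The L2,1 penalty encourages whole rows of the projection matrix to shrink toward zero, which is what lets the model select discriminative feature dimensions across the concatenated modalities rather than individual entries.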
