Zero-Shot Visual Recognition via Bidirectional Latent Embedding

07/07/2016
by Qian Wang, et al.

Zero-shot learning for visual recognition, e.g., object and action recognition, has recently attracted considerable attention. However, it remains challenging to bridge the semantic gap between visual features and their underlying semantics and to transfer knowledge to semantic categories unseen during learning. Unlike most existing zero-shot visual recognition methods, we propose a stagewise bidirectional latent embedding framework consisting of two subsequent learning stages. In the bottom-up stage, a latent embedding space is first created by exploring the topological and labeling information underlying the training data of known classes via a proper supervised subspace learning algorithm, and the latent embeddings of the training data are used to form landmarks that guide the embedding of the semantics underlying unseen classes into this learned latent space. In the top-down stage, semantic representations of unseen-class labels in a given label vocabulary are then embedded into the same latent space via our proposed semi-supervised Sammon mapping, guided by the landmarks, so as to preserve the semantic relatedness between all classes. As a result, the latent embedding space allows the label of a test instance to be predicted with a simple nearest-neighbor rule. To evaluate the effectiveness of the proposed framework, we have conducted extensive experiments on four benchmark datasets for object and action recognition, i.e., AwA, CUB-200-2011, UCF101, and HMDB51. The experimental results from comparative studies demonstrate that our approach yields state-of-the-art performance under both inductive and transductive settings.
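For illustration, the following is a minimal, self-contained Python sketch of how such a bidirectional pipeline can be wired together. It is not the paper's implementation: the supervised subspace learning step and the semi-supervised Sammon mapping are replaced by simple stand-ins (linear discriminant analysis and ridge regression on synthetic data). Only the overall structure described in the abstract is retained, namely a latent space learned from seen classes, class landmarks, embedding of class semantics into the same space, and a nearest-neighbor prediction rule.

```python
# Illustrative sketch only: the paper's supervised subspace learning and
# semi-supervised Sammon mapping are replaced by simple stand-ins
# (LDA and ridge regression) to show the overall two-stage structure.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy setup: 5 seen classes, 2 unseen classes, 50-d visual features,
# 10-d class attribute (semantic) vectors -- all synthetic.
n_seen, n_unseen, d_vis, d_sem = 5, 2, 50, 10
X_train = rng.normal(size=(200, d_vis))
y_train = rng.integers(0, n_seen, size=200)
attr = rng.normal(size=(n_seen + n_unseen, d_sem))  # one row per class label

# Bottom-up stage (stand-in): learn a latent space from seen-class data.
lda = LinearDiscriminantAnalysis(n_components=n_seen - 1)
Z_train = lda.fit_transform(X_train, y_train)

# Landmarks: mean latent embedding of each seen class.
landmarks = np.stack([Z_train[y_train == c].mean(axis=0) for c in range(n_seen)])

# Top-down stage (stand-in): map class attributes to the latent space,
# fitted on seen-class landmarks, then applied to unseen-class attributes.
reg = Ridge(alpha=1.0).fit(attr[:n_seen], landmarks)
class_embed = np.vstack([landmarks, reg.predict(attr[n_seen:])])

# Prediction: nearest class embedding in the latent space.
def predict(x_test):
    z = lda.transform(x_test.reshape(1, -1))
    return int(np.argmin(np.linalg.norm(class_embed - z, axis=1)))

print(predict(rng.normal(size=d_vis)))
```

In the paper's actual top-down stage, the unseen-class semantics are embedded via the proposed semi-supervised Sammon mapping guided by these landmarks rather than by a regression; the sketch above only mirrors the data flow and the final nearest-neighbor rule.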
