Information Bottleneck Constrained Latent Bidirectional Embedding for Zero-Shot Learning

by   Yang Liu, et al.

Zero-shot learning (ZSL) aims to recognize novel classes by transferring semantic knowledge from seen classes to unseen classes. Though many ZSL methods rely on a direct mapping between the visual and the semantic space, the calibration deviation and hubness problems limit their generalization capability to unseen classes. Recently emerged generative ZSL methods generate unseen image features to transform ZSL into a supervised classification problem. However, most generative models still suffer from the seen-unseen bias problem, as only seen data is used for training. To address these issues, we propose a novel bidirectional embedding based generative model with a tight visual-semantic coupling constraint. We learn a unified latent space that calibrates the embedded parametric distributions of both the visual and the semantic space. Since the embedding from high-dimensional visual features comprises much non-semantic information, the visual-semantic alignment in the latent space would inevitably be biased. Therefore, we introduce the information bottleneck (IB) constraint to ZSL for the first time to preserve essential attribute information during the mapping. Specifically, we utilize uncertainty estimation and the wake-sleep procedure to alleviate noise and improve model abstraction capability. We evaluate the learned latent features on four benchmark datasets. Extensive experimental results show that our method outperforms the state-of-the-art methods in different ZSL settings on most benchmark datasets. The code will be available at
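To make the IB idea concrete, the sketch below (not the authors' implementation; all names and the choice of a diagonal-Gaussian encoder and a simple mean-alignment term are illustrative assumptions) combines a visual-semantic alignment term with a KL compression penalty, the standard variational form of the information bottleneck:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent
    # dimensions and averaged over the batch. This is the compression
    # term of a variational information bottleneck: it penalizes how
    # much information the latent code retains about its input.
    return float(np.mean(
        0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1)))

def ib_alignment_loss(mu_vis, log_var_vis, mu_sem, log_var_sem, beta=0.01):
    # Alignment term: pull the visual and semantic latent means together,
    # coupling the two embedded distributions in the shared latent space.
    align = float(np.mean(np.sum((mu_vis - mu_sem) ** 2, axis=1)))
    # IB term: compress both latent codes toward the prior so that
    # non-semantic visual detail is discarded during the mapping.
    compress = (kl_to_standard_normal(mu_vis, log_var_vis)
                + kl_to_standard_normal(mu_sem, log_var_sem))
    return align + beta * compress
```

Here `beta` trades off alignment accuracy against compression; a larger value discards more input-specific information, which is the mechanism the abstract credits with filtering out non-semantic content.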




Leveraging Seen and Unseen Semantic Relationships for Generative Zero-Shot Learning

Zero-shot learning (ZSL) addresses the unseen class recognition problem ...

Generalized Zero-Shot Learning via VAE-Conditioned Generative Flow

Generalized zero-shot learning (GZSL) aims to recognize both seen and un...

Zero-Shot Visual Recognition via Bidirectional Latent Embedding

Zero-shot learning for visual recognition, e.g., object and action recog...

GSMFlow: Generation Shifts Mitigating Flow for Generalized Zero-Shot Learning

Generalized Zero-Shot Learning (GZSL) aims to recognize images from both...

Zero-Shot Learning by Harnessing Adversarial Samples

Zero-Shot Learning (ZSL) aims to recognize unseen classes by generalizin...

Generalised Zero-Shot Learning with a Classifier Ensemble over Multi-Modal Embedding Spaces

Generalised zero-shot learning (GZSL) methods aim to classify previously...

Structure-Aware Feature Generation for Zero-Shot Learning

Zero-Shot Learning (ZSL) targets at recognizing unseen categories by lev...
