Generating unseen complex scenes: are we there yet?

by Arantxa Casanova et al.

Although recent complex scene conditional generation models produce increasingly appealing scenes, it is very hard to assess which models perform better and why. This is largely because models are trained on different data splits and evaluated under their own experimental setups. In this paper, we propose a methodology for comparing complex scene conditional generation models, and provide an in-depth analysis that assesses each model's ability to (1) fit the training distribution and hence perform well on seen conditionings, (2) generalize to unseen conditionings composed of seen object combinations, and (3) generalize to unseen conditionings composed of unseen object combinations. We observe that recent methods can generate recognizable scenes given seen conditionings, and exploit compositionality to generalize to unseen conditionings with seen object combinations. However, all methods suffer noticeable image quality degradation when asked to generate images from conditionings composed of unseen object combinations. Moreover, through our analysis we identify the advantages of different pipeline components, and find that (1) encouraging compositionality through instance-wise spatial conditioning normalizations increases robustness to both types of unseen conditionings, (2) using semantically aware losses such as the scene-graph perceptual similarity helps improve some dimensions of the generation process, and (3) enhancing the quality of the generated masks and of the individual objects is crucial to improving robustness to both types of unseen conditionings.
