Visual Conceptual Blending with Large-scale Language and Vision Models

06/27/2021
by Songwei Ge, et al.
We ask the question: to what extent can recent large-scale language and image generation models blend visual concepts? Given an arbitrary object, we identify a relevant object and generate a single-sentence description of the blend of the two using a language model. We then generate a visual depiction of the blend using a text-based image generation model. Quantitative and qualitative evaluations demonstrate the superiority of language models over classical methods for conceptual blending, and of recent large-scale image generation models over prior models for the visual depiction.
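The two-stage pipeline described above (choose a partner object, have a language model phrase the blend as a single sentence, then pass that sentence to a text-based image generation model) could be wired together roughly as in the sketch below. This is a minimal illustration and not the paper's implementation: the Hugging Face text-generation pipeline, the GPT-2 placeholder model, the prompt wording, and the `text_to_image` stub are all assumptions made for the example.

```python
# Minimal sketch of the blend-then-render pipeline, assuming a generic
# text-generation model for the blend description and a hypothetical
# text-to-image function for the visual depiction.
from transformers import pipeline

# Stage 1: a language model proposes a one-sentence description of the blend.
text_generator = pipeline("text-generation", model="gpt2")  # placeholder model choice


def describe_blend(object_a: str, object_b: str) -> str:
    """Generate a single-sentence description of a blend of two objects."""
    prompt = f"A {object_a} that is also a {object_b} looks like"
    output = text_generator(prompt, max_new_tokens=30, num_return_sequences=1)
    # Keep only the first sentence of the model's continuation.
    continuation = output[0]["generated_text"][len(prompt):]
    sentence = continuation.split(".")[0].strip()
    return f"{prompt} {sentence}."


def text_to_image(description: str):
    """Hypothetical stand-in for a text-based image generation model."""
    raise NotImplementedError("Plug in a text-to-image model here.")


if __name__ == "__main__":
    # Stage 2 would pass the description to the image generation model.
    blend = describe_blend("lion", "teapot")
    print(blend)
    # image = text_to_image(blend)
```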
