Drop the GAN: In Defense of Patches Nearest Neighbors as Single Image Generative Models

by Niv Granot, et al.

Single image generative models perform synthesis and manipulation tasks by capturing the distribution of patches within a single image. The classical (pre-deep-learning) approaches for these tasks were based on an optimization process that maximizes patch similarity between the input image and the generated output. Recently, single-image GANs were introduced both as a superior solution to such manipulation tasks and as a way to enable remarkable novel generative tasks. Despite their impressive results, single-image GANs require long training times (usually hours) for each image and each task, often suffer from artifacts, and are prone to optimization issues such as mode collapse. In this paper, we show that all of these tasks can be performed without any training, within several seconds, in a unified, surprisingly simple framework. We revisit the "good old" patch-based methods and cast them into a novel optimization-free framework: we start with an initial coarse guess and then refine the details coarse-to-fine using patch-nearest-neighbor search. This allows generating random novel images better and much faster than GANs. We further demonstrate a wide range of applications, such as image editing and reshuffling, retargeting to different sizes, structural analogies, image collage, and a newly introduced task of conditional inpainting. Not only is our method faster (by a factor of 10^3-10^4 compared to a GAN), it also produces superior results (confirmed by quantitative and qualitative evaluation), with fewer artifacts and more realistic global structure than any of the previous approaches, whether GAN-based or classical patch-based.
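The core refinement step the abstract describes can be illustrated with a short sketch: replace every patch of the current guess with its nearest neighbor among the source image's patches, then average overlapping patches back into an image ("voting"). This is a minimal, brute-force NumPy illustration of that idea, not the authors' implementation; real systems run it inside a coarse-to-fine pyramid and use an approximate nearest-neighbor search such as PatchMatch instead of the exhaustive distance matrix used here. The function names `extract_patches` and `pnn_refine` are hypothetical.

```python
import numpy as np

def extract_patches(img, p):
    """Extract all overlapping p x p patches of a 2D image as flat rows."""
    H, W = img.shape
    return np.stack([
        img[i:i + p, j:j + p].ravel()
        for i in range(H - p + 1)
        for j in range(W - p + 1)
    ])

def pnn_refine(query, source, p=3, iters=3):
    """One scale of patch-nearest-neighbor refinement (illustrative sketch):
    replace each query patch with its L2-nearest source patch, then
    reconstruct the image by averaging the overlapping patches."""
    src = extract_patches(source.astype(np.float64), p)
    out = query.astype(np.float64).copy()
    H, W = out.shape
    for _ in range(iters):
        q = extract_patches(out, p)
        # Brute-force L2 nearest neighbor; an ANN search is used in practice.
        d = ((q[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        nn = src[d.argmin(1)]
        acc = np.zeros_like(out)
        cnt = np.zeros_like(out)
        k = 0
        for i in range(H - p + 1):
            for j in range(W - p + 1):
                acc[i:i + p, j:j + p] += nn[k].reshape(p, p)
                cnt[i:i + p, j:j + p] += 1
                k += 1
        out = acc / cnt  # every output pixel is an average of source values
    return out
```

In the full coarse-to-fine scheme, this refinement would be applied at each level of an image pyramid, with the coarsest level seeded by the "initial coarse guess" (e.g. a downscaled or noised version of the input), so global structure is fixed early and details are filled in at finer scales.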



Diverse Generation from a Single Video Made Possible

Most advanced video generation and manipulation methods train on a large...

Generating natural images with direct Patch Distributions Matching

Many traditional computer vision algorithms generate realistic images by...

Diverse Video Generation from a Single Video

GANs are able to perform generation and manipulation tasks, trained on a...

Training End-to-end Single Image Generators without GANs

We present AugurOne, a novel approach for training single image generati...

PetsGAN: Rethinking Priors for Single Image Generation

Single image generation (SIG), described as generating diverse samples t...

ExSinGAN: Learning an Explainable Generative Model from a Single Image

Generating images from a single sample, as a newly developing branch of ...

Improved Techniques for Training Single-Image GANs

Recently there has been an interest in the potential of learning generat...
