On Need for Topology Awareness of Generative Models

09/07/2019
by Uyeong Jang, et al.

The manifold assumption in learning states that the data lie approximately on a manifold of much lower dimension than the input space. Generative models learn to generate data according to the underlying data distribution, and they are used in various tasks, such as data augmentation and generating variations of images. This paper addresses the following question: do generative models need to be aware of the topology of the underlying data manifold on which the data lie? This paper suggests that the answer is yes, and demonstrates that this can have ramifications for security-critical applications, such as generative-model-based defenses against adversarial examples. We provide theoretical and experimental results to support our claims.
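To make the setting concrete, here is a minimal sketch of a manifold-based defense, not the paper's method: it assumes the data lie near a low-dimensional *linear* manifold, learned here with a simple PCA stand-in for a generative model, and projects inputs onto that manifold before classification so that small off-manifold (adversarial) perturbations are removed. All names and the synthetic data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a 2-D linear manifold embedded in a 10-D input space.
basis = rng.normal(size=(10, 2))
basis, _ = np.linalg.qr(basis)           # orthonormal basis for the manifold
latent = rng.normal(size=(500, 2))
X = latent @ basis.T                     # on-manifold training data

# "Learn" the manifold with PCA (a stand-in for a trained generative model).
_, _, vt = np.linalg.svd(X, full_matrices=False)
components = vt[:2]                      # top-2 principal directions

def project(x):
    """Project an input onto the learned manifold."""
    return (x @ components.T) @ components

x = latent[0] @ basis.T                  # a clean, on-manifold point
x_adv = x + 0.5 * rng.normal(size=10)    # an off-manifold perturbation

# Projection removes the off-manifold component of the perturbation,
# so the projected input is closer to the clean point than x_adv is.
print(np.linalg.norm(project(x_adv) - x) < np.linalg.norm(x_adv - x))  # True
```

The paper's point is that when the data manifold has nontrivial topology (e.g., multiple components or holes), a generative model that is unaware of it can map off-manifold points poorly, which is exactly what such a defense relies on.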
