Invariant Representations with Stochastically Quantized Neural Networks

08/04/2022
by Mattia Cerrato et al.

Representation learning algorithms offer the opportunity to learn invariant representations of the input data with respect to nuisance factors. Many authors have leveraged such strategies to learn fair representations, i.e., vectors from which information about sensitive attributes has been removed. These methods are attractive as they may be interpreted as minimizing the mutual information between a neural layer's activations and a sensitive attribute. However, the theoretical grounding of such methods relies either on the computation of infinitely accurate adversaries or on minimizing a variational upper bound of a mutual information estimate. In this paper, we propose a methodology for the direct computation of the mutual information between a neural layer and a sensitive attribute. We employ stochastically activated binary neural networks, which allows us to treat neurons as random variables. We are then able to compute (not bound) the mutual information between a layer and a sensitive attribute and use this quantity as a regularization term during gradient descent. We show that this method compares favorably with the state of the art in fair representation learning and that the learned representations display a higher level of invariance compared to full-precision neural networks.
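Because each neuron in a stochastically activated binary layer is a Bernoulli random variable with a known firing probability, the mutual information I(Z; S) between the layer's activations Z and a discrete sensitive attribute S can be computed exactly by enumerating activation patterns. The sketch below illustrates this idea; it is not the authors' implementation, and it assumes a small layer (so the 2^k patterns are enumerable) and a discrete sensitive attribute. All function names are illustrative.

```python
import numpy as np
from itertools import product

def layer_probs(X, W, b):
    """Per-neuron Bernoulli firing probabilities: sigmoid(XW + b)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def mutual_information(P, s):
    """Exact I(Z; S) in nats for a layer of independent Bernoulli neurons.

    P : (n, k) array of per-neuron firing probabilities, one row per example.
    s : (n,) array of discrete sensitive-attribute values.
    Enumerates all 2^k binary activation patterns (feasible only for small k).
    """
    n, k = P.shape
    patterns = np.array(list(product([0, 1], repeat=k)))  # (2^k, k)
    # p(z | x_i): neurons fire independently, so a product over units
    pz_given_x = np.prod(
        np.where(patterns[None, :, :] == 1, P[:, None, :], 1.0 - P[:, None, :]),
        axis=2,
    )  # (n, 2^k)
    pz = pz_given_x.mean(axis=0)  # marginal p(z), averaged over the data
    mi = 0.0
    for sv in np.unique(s):
        mask = s == sv
        ps = mask.mean()                       # p(S = sv)
        pz_s = pz_given_x[mask].mean(axis=0)   # p(z | S = sv)
        nz = pz_s > 0                          # skip zero-probability patterns
        mi += ps * np.sum(pz_s[nz] * np.log(pz_s[nz] / pz[nz]))
    return mi
```

In this formulation the MI estimate is a differentiable function of the firing probabilities, so it can in principle serve as a regularization term alongside the task loss; the abstract's full-network construction is more involved than this single-layer sketch.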

