Right for the Right Reason: Training Agnostic Networks

by   Sen Jia, et al.

We consider the problem of a neural network being asked to classify images (or other inputs) without making implicit use of a "protected concept", that is, a concept that should play no role in the network's decision. Typically these concepts include information such as gender or race, or contextual information such as image backgrounds, which may be implicitly reflected in unknown correlations with other variables; simply removing them from the input features is therefore insufficient. In other words, making accurate predictions is not good enough if those predictions rely on information that should not be used: predictive performance is not the only important metric for learning systems. We apply a method developed in the context of domain adaptation to address this problem of "being right for the right reason", requiring a classifier to make its decision in a way that is entirely 'agnostic' to a given protected concept (e.g., gender, race, or background), even if that concept is implicitly reflected in other attributes via unknown correlations. After defining the notion of an 'agnostic model', we demonstrate how the Domain-Adversarial Neural Network can remove unwanted information from a model using a gradient reversal layer.
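The core mechanism mentioned above, the gradient reversal layer from the Domain-Adversarial Neural Network, acts as the identity in the forward pass but negates (and optionally scales) gradients in the backward pass, so the shared features are trained to *hurt* the protected-attribute classifier. The sketch below is a minimal NumPy illustration of that gradient flow, not the paper's implementation; the variable names, the toy stand-in gradients, and the scaling factor `lam` are all assumptions for demonstration.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; multiplies gradients by -lam in backward."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x

    def backward(self, grad_output):
        return -self.lam * grad_output

# Toy illustration: a shared feature vector z feeds two heads.
# The task head's gradient flows back unchanged; the protected-attribute
# head's gradient passes through the reversal layer, pushing the feature
# extractor to make z uninformative about the protected concept.
rng = np.random.default_rng(0)
z = rng.normal(size=3)           # shared representation (hypothetical values)
grad_task = rng.normal(size=3)   # stand-in for d(task loss)/dz
grad_prot = rng.normal(size=3)   # stand-in for d(protected loss)/dz

grl = GradientReversal(lam=1.0)

# Total gradient reaching the shared feature extractor:
grad_z = grad_task + grl.backward(grad_prot)
```

In a full training loop the same `lam` is often annealed from 0 to 1 so the feature extractor first learns the main task before the adversarial signal dominates.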
