A Fair Loss Function for Network Pruning

11/18/2022
by Robbie Meyer, et al.

Model pruning can enable the deployment of neural networks in resource-constrained environments. While pruning may have only a small effect on a model's overall performance, it can exacerbate existing biases in the model such that subsets of samples see significantly degraded performance. In this paper, we introduce the performance-weighted loss function, a simple modified cross-entropy loss function that can be used to limit the introduction of biases during pruning. Experiments using biased classifiers for facial classification and skin-lesion classification tasks demonstrate that the proposed method is a simple and effective tool that can enable existing pruning methods to be used in fairness-sensitive contexts.
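The abstract describes the method as a modified cross-entropy loss that weights samples by performance. The sketch below is only an illustration of that general idea, not the paper's actual formulation: it upweights samples on which a reference (unpruned) model performed poorly, so that fine-tuning during pruning does not sacrifice those samples further. The function names, the exponential weighting scheme, and the `beta` parameter are all assumptions introduced for the example.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Per-sample negative log-likelihood of the true class."""
    return -np.log(probs[np.arange(len(labels)), labels])

def performance_weighted_loss(probs, labels, reference_probs, beta=1.0):
    """Hypothetical sketch of a performance-weighted cross-entropy loss.

    probs:            class probabilities from the model being pruned
    reference_probs:  class probabilities from the original (unpruned) model
    beta:             assumed temperature controlling how strongly poorly
                      served samples are upweighted
    """
    ce = cross_entropy(probs, labels)
    reference_ce = cross_entropy(reference_probs, labels)
    # Larger weight where the reference model already struggled (assumption).
    weights = np.exp(beta * reference_ce)
    # Normalize so the average weight is 1 and the loss scale is preserved.
    weights = weights / weights.mean()
    return np.mean(weights * ce)
```

Under this sketch, a sample the original model classified poorly contributes more to the loss than under plain cross-entropy, nudging the pruned model to retain accuracy on exactly the subsets that pruning tends to degrade.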


