Achieving Fairness at No Utility Cost via Data Reweighing

02/01/2022
by Peizhao Li, et al.

With the rapid development of algorithmic governance, fairness has become a compulsory property for machine learning models to suppress unintentional discrimination. In this paper, we focus on the pre-processing stage for achieving fairness and propose a data reweighing approach that only adjusts sample weights during the training phase. Unlike most previous reweighing methods, which assign a uniform weight to each (sub)group, we granularly model the influence of each training sample with regard to fairness and predictive utility, and compute individual weights based on this influence under constraints on both fairness and utility. Experimental results reveal that previous methods achieve fairness at a non-negligible cost in utility, whereas, as a significant advantage, our approach can empirically break the tradeoff and obtain cost-free fairness. We demonstrate cost-free fairness with vanilla classifiers and standard training processes across different fairness notions, compared against baseline methods on multiple tabular datasets.
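To make the idea concrete, below is a minimal, hypothetical sketch of influence-based data reweighing, assuming a weighted logistic-regression classifier, standard influence-function approximations, and a demographic-parity-style gap as the fairness quantity. The data, the `fit_logreg` helper, and the linear-program setup are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: per-sample influence on utility and fairness, then a small LP that
# reweighs samples to reduce the fairness gap without hurting utility (first-order).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy data: features X, labels y, binary sensitive attribute s (all assumed).
n, d = 400, 5
X = rng.normal(size=(n, d))
s = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * s + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, w, l2=1e-2, iters=300, lr=0.5):
    """Weighted logistic regression fitted by plain gradient descent."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ theta)
        grad = X.T @ (w * (p - y)) / len(y) + l2 * theta
        theta -= lr * grad
    return theta

w0 = np.ones(n)
theta = fit_logreg(X, y, w0)
p = sigmoid(X @ theta)

# Standard influence-function pieces: Hessian of the (regularized) average loss
# and per-sample gradients of the log loss.
H = (X * (p * (1 - p))[:, None]).T @ X / n + 1e-2 * np.eye(d)
H_inv = np.linalg.inv(H)
grads = X * (p - y)[:, None]

# Gradient of the utility objective (average log loss) w.r.t. theta.
g_util = grads.mean(axis=0)
# Gradient of a demographic-parity-style gap: mean score of group 1 minus group 0.
g_fair = (X[s == 1] * (p * (1 - p))[s == 1, None]).mean(axis=0) \
       - (X[s == 0] * (p * (1 - p))[s == 0, None]).mean(axis=0)

# First-order effect of upweighting each sample on utility and on the signed gap.
infl_util = -grads @ (H_inv @ g_util)
infl_fair = -grads @ (H_inv @ g_fair)

# Linear program: choose per-sample weight increments in [0, 1] that most reduce
# the absolute fairness gap while the predicted change in utility loss stays <= 0.
gap = p[s == 1].mean() - p[s == 0].mean()
res = linprog(c=np.sign(gap) * infl_fair,
              A_ub=infl_util[None, :], b_ub=[0.0],
              bounds=[(0.0, 1.0)] * n)

new_w = w0 + res.x
theta_new = fit_logreg(X, y, new_w)
p_new = sigmoid(X @ theta_new)
print("gap before reweighing:", abs(gap))
print("gap after  reweighing:", abs(p_new[s == 1].mean() - p_new[s == 0].mean()))
```

Because both the fairness and utility effects are modeled per sample (rather than per group), the LP can trade individual weights against each other, which is what lets the fairness constraint be met without a first-order utility penalty in this sketch.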
