Adversarial Sampling for Fairness Testing in Deep Neural Network

by Tosin Ige, et al.

In this research, we use adversarial sampling to test the fairness of a deep neural network model's predictions across the different classes of images in a given dataset. Several frameworks have been proposed to make machine learning models robust against adversarial attacks, including adversarial training algorithms; however, adversarial training tends to cause disparities in accuracy and robustness among different groups. We demonstrate a new method for ensuring fairness across the various groups of inputs to a deep neural network classifier. We trained our model on the original images only, without training it on the perturbed or attacked images. When we fed the adversarial samples to the model, it was still able to predict the original category/class of each image. We also introduced and used the separation-of-concerns concept from software engineering: an additional standalone filter layer heavily removes the noise or attack from a perturbed image before automatically passing it to the network for classification. With this approach we achieved an accuracy of 93.3%. To account for fairness, we applied our hypothesis to each category of the dataset and obtained consistent results and accuracy.
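The abstract does not specify which denoising operation the standalone filter layer uses, so the sketch below is only illustrative: it uses a median filter (a common choice for suppressing sparse perturbations) as the separate pre-classification stage, with `classify` and the placeholder `model` as hypothetical names. The point is the separation of concerns: the filter is independent of the network, which is trained on clean images only.

```python
import numpy as np
from scipy.ndimage import median_filter

def filter_layer(image, size=3):
    """Standalone denoising stage: suppresses sparse adversarial-style
    perturbations with a median filter before the classifier sees the image."""
    return median_filter(image, size=size)

def classify(model, image):
    # Separation of concerns: the model itself never sees perturbed inputs
    # during training; the filter layer handles attack removal on its own.
    return model(filter_layer(image))

# Toy demonstration with sparse "perturbations" (zeroed pixels) on a
# constant image: the median filter restores the clean values exactly.
clean = np.ones((8, 8))
perturbed = clean.copy()
for r, c in [(1, 1), (3, 5), (6, 2)]:
    perturbed[r, c] = 0.0          # isolated perturbed pixels
restored = filter_layer(perturbed)
print(np.array_equal(restored, clean))
```

Each zeroed pixel here is isolated, so its 3x3 neighborhood is dominated by clean values and the median recovers the original image; a real adversarial perturbation is denser, which is why the paper's filter only "heavily removes" rather than eliminates the attack.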


