Bridging machine learning and cryptography in defence against adversarial attacks

09/05/2018
by Olga Taran et al.

In the last decade, deep learning algorithms have become very popular thanks to their performance in many machine learning and computer vision tasks. However, most deep learning architectures are vulnerable to so-called adversarial examples, which calls into question the security of deep neural networks (DNN) in security- and trust-sensitive domains. The majority of existing adversarial attacks exploit the differentiability of the DNN cost function. Existing defence strategies are mostly based on machine learning and signal processing principles that either detect-and-reject or filter out the adversarial perturbations, and they completely neglect the classical cryptographic component of a defence. In this work, we propose a new defence mechanism based on the second Kerckhoffs's cryptographic principle, which states that the defence and classification algorithms may be publicly known, but not the secret key. To remain compliant with the assumption that the attacker has no access to the secret key, we primarily focus on a gray-box scenario and do not address a white-box one. More specifically, we assume that the attacker does not have direct access to the secret block, but (a) he knows the system architecture completely, (b) he has access to the data used for training and testing, and (c) he can observe the output of the classifier for any given input. We show empirically that our system is effective against the most well-known state-of-the-art attacks in black-box and gray-box scenarios.
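The abstract does not spell out the secret block itself, but the Kerckhoffs-style idea of a keyed, data-independent pre-processing stage can be sketched concretely. The following is a minimal illustration, not the authors' actual construction: a secret key seeds a pseudo-random permutation and sign-flipping of the input before it reaches the classifier, so an attacker who knows the architecture and the data, but not the key, cannot reproduce the transform. The function name `keyed_transform`, the key value, and the input shape are all hypothetical.

```python
import numpy as np

def keyed_transform(x: np.ndarray, key: int) -> np.ndarray:
    """Key-based pre-processing: permute and sign-flip the input.

    The transform is deterministic given the secret key, so the
    classifier can be trained on transformed data; without the key,
    gradients computed against the public architecture alone do not
    match the end-to-end system.
    """
    flat = x.reshape(-1)
    rng = np.random.default_rng(key)            # secret key seeds the PRNG
    perm = rng.permutation(flat.size)           # keyed permutation
    signs = rng.choice([-1.0, 1.0], flat.size)  # keyed sign flips
    return (flat[perm] * signs).reshape(x.shape)

# Hypothetical usage: the key stays server-side, never shown to the attacker.
key = 0x5EC12E7
x = np.random.rand(28, 28).astype(np.float32)   # e.g. a grayscale image
x_defended = keyed_transform(x, key)
# logits = classifier(x_defended)  # classifier trained on transformed inputs
```

Under the gray-box assumptions (a)-(c) above, the attacker can still query the classifier's outputs, so a transform of this kind raises the cost of gradient-based attacks rather than eliminating query-based ones; the paper's empirical evaluation covers both black-box and gray-box settings.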

Related research

08/15/2023
A Review of Adversarial Attacks in Computer Vision
Deep neural networks have been widely used in various downstream tasks, ...

04/30/2021
Black-box adversarial attacks using Evolution Strategies
In the last decade, deep neural networks have proven to be very powerful...

04/01/2019
Defending against adversarial attacks by randomized diversification
The vulnerability of machine learning systems to adversarial attacks que...

12/20/2019
secml: A Python Library for Secure and Explainable Machine Learning
We present secml, an open-source Python library for secure and explainab...

10/26/2020
Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
In the era of deep learning, a user often leverages a third-party machin...

11/02/2022
Defending with Errors: Approximate Computing for Robustness of Deep Neural Networks
Machine-learning architectures, such as Convolutional Neural Networks (C...

10/09/2019
Deep Latent Defence
Deep learning methods have shown state of the art performance in a range...
