Linear and non-linear machine learning attacks on physical unclonable functions

01/06/2023
by Michael Lachner, et al.

In this thesis, several linear and non-linear machine learning attacks on optical physical unclonable functions (PUFs) are presented. To this end, a simulation of such a PUF is implemented to generate a variety of datasets that differ in several factors, in order to identify the best simulation setup and to study the behavior of the machine learning attacks under different conditions. All datasets are evaluated in terms of individual samples and their mutual correlations. Subsequently, both linear and deep learning approaches are used to attack these PUF simulations and to comprehensively investigate how the different dataset factors affect the security level against attackers. In addition, the performance differences between the two attack methods are highlighted using several independent metrics. Several improvements to these models, as well as new attacks, are then introduced and investigated sequentially, with the goal of progressively improving modeling performance; this culminates in an attack capable of almost perfectly predicting the outputs of the simulated PUF. Finally, data from a real optical PUF is examined, both to compare it with the simulated data and to assess how the presented machine learning models would perform in the real world. The results show that all models meet the defined criterion for a successful machine learning attack.
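The core idea of a linear modeling attack as described above can be illustrated with a toy example. The sketch below is an assumption-laden simplification, not the thesis's optical PUF simulation: it models a hypothetical PUF whose binary responses arise from a secret linear map of the challenge bits plus noise, collects challenge-response pairs, and fits a least-squares linear model to predict unseen responses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy PUF (an assumption for illustration, not the thesis's model):
# each +-1 response is the sign of a secret linear function of the challenge
# bits, perturbed by small measurement noise.
n_challenges, n_bits = 2000, 64
secret = rng.normal(size=n_bits)                      # device-specific weights
C = rng.integers(0, 2, size=(n_challenges, n_bits)).astype(float)
r = np.sign(C @ secret + 0.1 * rng.normal(size=n_challenges))

# Linear modeling attack: least-squares fit on observed challenge/response
# pairs, then prediction of responses to challenges never seen in training.
train, test = slice(0, 1500), slice(1500, None)
w, *_ = np.linalg.lstsq(C[train], r[train], rcond=None)
accuracy = np.mean(np.sign(C[test] @ w) == r[test])
print(f"prediction accuracy on unseen challenges: {accuracy:.2f}")
```

Because the toy response map is itself (nearly) linear, the attack recovers it almost perfectly; the thesis's point is that non-linear (deep learning) attacks become necessary as the PUF's challenge-response behavior grows more complex.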


