Adversarial Transferability in Wearable Sensor Systems

03/17/2020
by Ramesh Kumar Sah, et al.

Machine learning has increasingly become the most used approach for inference and decision making in wearable sensor systems. However, recent studies have found that machine learning systems are easily fooled by the addition of adversarial perturbations to their inputs. More interesting still, adversarial examples generated for one machine learning system can also degrade the performance of another. This property of adversarial examples is called transferability. In this work, we take the first strides in studying adversarial transferability in wearable sensor systems from four perspectives: 1) transferability between machine learning models, 2) transferability across subjects, 3) transferability across sensor locations, and 4) transferability across datasets. With Human Activity Recognition (HAR) as an example sensor system, we found strong untargeted transferability in all four cases. In particular, gradient-based attacks achieved higher misclassification rates than non-gradient attacks. The misclassification rate of untargeted adversarial examples ranged from 20% to 98%. In the case of targeted transferability between machine learning models, the success rate of adversarial examples was 100%, while the success rate for the other types of targeted transferability ranged from 20% to 0%. Our findings suggest that adversarial transferability has serious consequences not only in sensor systems but also across the broad spectrum of ubiquitous computing.
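
To make the transferability setup concrete, here is a minimal sketch of an untargeted, gradient-based attack (FGSM) whose adversarial examples are crafted against one model and then evaluated on an independently trained second model. The two-layer networks, the synthetic "sensor window" data, and the epsilon value are illustrative assumptions for this sketch, not the models, datasets, or attack parameters used in the paper.

```python
# Sketch: craft FGSM adversarial examples on a source model, then measure
# how often they also fool an independently trained target model.
import torch
import torch.nn as nn

torch.manual_seed(0)

N_CLASSES, WINDOW = 6, 3 * 128  # e.g. 6 activities, 128 samples x 3 axes

def make_model():
    # Hypothetical tiny classifier standing in for a HAR model.
    return nn.Sequential(nn.Linear(WINDOW, 64), nn.ReLU(),
                         nn.Linear(64, N_CLASSES))

# Synthetic, class-shifted Gaussian clusters stand in for real sensor windows.
y = torch.randint(0, N_CLASSES, (512,))
x = torch.randn(512, WINDOW) + y.float().unsqueeze(1)

source, target = make_model(), make_model()
loss_fn = nn.CrossEntropyLoss()
for model in (source, target):  # train each model independently
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# FGSM: one signed-gradient ascent step on the source model's loss.
eps = 0.5  # illustrative perturbation budget
x_adv = x.clone().requires_grad_(True)
loss_fn(source(x_adv), y).backward()
x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

# Untargeted transferability: misclassification rate on each model.
for name, model in [("source", source), ("target", target)]:
    err = (model(x_adv).argmax(1) != y).float().mean()
    print(f"misclassification on {name} model: {err:.1%}")
```

A nonzero misclassification rate on the target model, which never saw the source model's gradients, is what the abstract calls untargeted transferability; swapping the target's training data for another subject, sensor location, or dataset gives the other cases studied.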
