Can We Leverage Predictive Uncertainty to Detect Dataset Shift and Adversarial Examples in Android Malware Detection?

09/20/2021
by   Deqiang Li, et al.

The deep learning approach to detecting malicious software (malware) is promising but has yet to tackle the problem of dataset shift: the joint distribution of examples and their labels in the test set differs from that of the training set. This problem degrades deep learning models without users noticing. One way to alleviate it is to have a classifier not only predict the label of a given example but also report its uncertainty (or confidence) in that label, so that a defender can decide whether to act on the prediction. While intuitive and clearly important, the capabilities and limitations of this approach have not been well understood. In this paper, we conduct an empirical study to evaluate the quality of the predictive uncertainties of malware detectors. Specifically, we re-design and build 24 Android malware detectors (by transforming four off-the-shelf detectors with six calibration methods) and quantify their uncertainties with nine metrics, including three that account for data imbalance. Our main findings are: (i) predictive uncertainty indeed helps achieve reliable malware detection in the presence of dataset shift, but cannot cope with adversarial evasion attacks; (ii) approximate Bayesian methods are promising for calibrating and generalizing malware detectors to deal with dataset shift, but likewise fail against adversarial evasion attacks; (iii) adversarial evasion attacks can render calibration methods useless, and quantifying the uncertainty associated with the predicted labels of adversarial examples remains an open problem (i.e., predictive uncertainty is not effective for detecting adversarial examples).
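The defender's workflow described above -- predict a label, measure uncertainty over repeated stochastic forward passes (as in MC dropout or a deep ensemble, one family of the approximate Bayesian methods the paper evaluates), and abstain when uncertainty is too high -- can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the entropy-based uncertainty measure, and the threshold value are all illustrative assumptions.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean class distribution across stochastic passes.

    probs: array of shape (n_passes, n_classes) holding class probabilities
    from repeated stochastic forward passes (e.g. MC dropout samples).
    Higher entropy means the model is less certain about the label.
    """
    mean = probs.mean(axis=0)
    eps = 1e-12  # guard against log(0)
    return float(-(mean * np.log(mean + eps)).sum())

def predict_or_abstain(probs, threshold):
    """Return the predicted class index, or None to defer to an analyst."""
    if predictive_entropy(probs) > threshold:
        return None  # uncertainty too high: do not trust the predicted label
    return int(probs.mean(axis=0).argmax())

# Confident case: all passes agree the app is malware (class 1).
confident = np.array([[0.05, 0.95], [0.02, 0.98], [0.04, 0.96]])
# Uncertain case: passes disagree, as often happens under dataset shift.
uncertain = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])

print(predict_or_abstain(confident, threshold=0.3))  # 1
print(predict_or_abstain(uncertain, threshold=0.3))  # None
```

Note that this rejection scheme is exactly what the paper finds adversarial evasion attacks defeat: a well-crafted adversarial example can be misclassified with *low* entropy, so it sails past the threshold.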


Related research:

- Android HIV: A Study of Repackaging Malware for Evading Machine-Learning Detection (08/10/2018)
- Adversarial Deep Learning for Robust Detection of Binary Encoded Malware (01/09/2018)
- Towards a Practical Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via Randomized Smoothing (08/17/2023)
- Domain Constraints in Feature Space: Strengthening Robustness of Android Malware Detection against Realizable Adversarial Examples (05/30/2022)
- Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach (04/20/2019)
- Exploring Adversarial Examples in Malware Detection (10/18/2018)
- Leveraging Uncertainty for Effective Malware Mitigation (02/07/2018)
