Malware Evasion Attack and Defense
Machine learning (ML) classifiers are vulnerable to adversarial examples. An adversarial example is an input sample that has been modified slightly but intentionally so that an ML classifier misclassifies it. In this work, we investigate white-box and grey-box evasion attacks on an ML-based malware detector and conduct performance evaluations in a real-world setting. We propose a framework for deploying grey-box and black-box attacks against malware detection systems, and we compare defense approaches in terms of their effectiveness at mitigating these attacks.
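To illustrate the evasion idea in concrete terms, the following is a minimal sketch (not taken from the paper) of a gradient-based perturbation against a hypothetical linear surrogate malware classifier over a continuous feature vector; the weights, feature vector, and perturbation budget are all illustrative assumptions.

# Illustrative sketch only: evasion-style perturbation against a hypothetical
# linear surrogate classifier (score = w.x + b, score > 0 => "malware").
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=128)   # assumed surrogate weights
b = -0.5                   # assumed bias

def score(x):
    return float(x @ w + b)

# A feature vector the surrogate flags as malware (constructed to align with w)
x = np.clip(np.sign(w) * rng.uniform(0.2, 0.8, size=128), 0.0, 1.0)

eps = 0.1  # per-feature perturbation budget
# For a linear model the gradient of the score w.r.t. x is w, so stepping
# against its sign lowers the malware score (FGSM-style step).
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print("original score: ", score(x))      # positive, i.e. classified as malware
print("perturbed score:", score(x_adv))  # pushed toward benign; may or may not
                                          # cross the boundary within this budget

In practice, malware feature spaces are often discrete and functionality-preserving constraints apply, so a real attack would project the perturbation onto valid feature modifications rather than clipping continuous values as this sketch does.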