Interpreting GNN-based IDS Detections Using Provenance Graph Structural Features

06/01/2023
by Kunal Mukherjee, et al.

The black-box nature of complex Neural Network (NN)-based models has hindered their widespread adoption in security domains due to the lack of logical explanations and actionable follow-ups for their predictions. To enhance the transparency and accountability of Graph Neural Network (GNN) security models used in system provenance analysis, we propose PROVEXPLAINER, a framework for projecting abstract GNN decision boundaries onto interpretable feature spaces. We first replicate the decision-making process of GNN-based security models using simpler, explainable models such as Decision Trees (DTs). To maximize the accuracy and fidelity of the surrogate models, we propose novel graph structural features founded on classical graph theory and refined through extensive data study informed by security domain knowledge. Our graph structural features are closely tied to problem-space actions in the system provenance domain, which allows the detection results to be explained in descriptive, human-readable language. PROVEXPLAINER allowed simple DT models to achieve 95% fidelity on program classification tasks with general graph structural features, and 99% fidelity on malware detection tasks with a task-specific feature package tailored for direct interpretation. The explanations for malware classification are demonstrated with case studies of five real-world malware samples across three malware families.
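To make the surrogate idea concrete, below is a minimal sketch (not the authors' code) of how a decision-tree surrogate could be fitted to a GNN detector's predicted labels using simple graph structural features, with fidelity measured as agreement between the surrogate and the GNN. The feature choices (size, density, degree extremes, clustering, leaf ratio) and the function names are illustrative stand-ins, not the provenance-specific feature package described in the paper.

import numpy as np
import networkx as nx
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

def structural_features(G):
    """Map a provenance graph (nx.DiGraph) to a fixed-length feature vector."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    out_deg = [d for _, d in G.out_degree()]
    in_deg = [d for _, d in G.in_degree()]
    return np.array([
        n,                                         # node count
        m,                                         # edge count
        nx.density(G),                             # edge density
        max(out_deg, default=0),                   # fan-out, e.g. a process spawning many children
        max(in_deg, default=0),                    # fan-in, e.g. a file touched by many processes
        float(np.mean(out_deg)) if out_deg else 0.0,
        nx.average_clustering(G.to_undirected()),  # local connectivity
        sum(1 for d in out_deg if d == 0) / max(n, 1),  # fraction of leaf nodes
    ])

def fit_surrogate(graphs, gnn_predictions, max_depth=5):
    """Train a shallow DT on the GNN's predicted labels (not ground truth)
    and report fidelity, i.e. how often the surrogate agrees with the GNN."""
    X = np.stack([structural_features(G) for G in graphs])
    dt = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    dt.fit(X, gnn_predictions)
    fidelity = accuracy_score(gnn_predictions, dt.predict(X))
    return dt, fidelity

Inspecting the fitted tree afterwards (for example with sklearn.tree.export_text) yields human-readable rules over these features, which is in the spirit of the problem-space explanations the abstract describes.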

Related research

05/26/2022
DT+GNN: A Fully Explainable Graph Neural Network using Decision Trees
We propose the fully explainable Decision Tree Graph Neural Network (DT+...

03/22/2023
A Comparison of Graph Neural Networks for Malware Classification
Managing the threat posed by malware requires accurate detection and cla...

03/05/2021
NF-GNN: Network Flow Graph Neural Networks for Malware Detection and Classification
Malicious software (malware) poses an increasing threat to the security ...

12/03/2021
Combining Sub-Symbolic and Symbolic Methods for Explainability
Similarly to other connectionist models, Graph Neural Networks (GNNs) la...

05/04/2022
Explainable Knowledge Graph Embedding: Inference Reconciliation for Knowledge Inferences Supporting Robot Actions
Learned knowledge graph representations supporting robots contain a weal...

07/15/2021
Algorithmic Concept-based Explainable Reasoning
Recent research on graph neural network (GNN) models successfully applie...

04/30/2019
To believe or not to believe: Validating explanation fidelity for dynamic malware analysis
Converting malware into images followed by vision-based deep learning al...
