Adversarial Attack on Large Scale Graph

by Jintang Li et al.

Recent studies have shown that graph neural networks are vulnerable to perturbations due to their lack of robustness and can therefore be easily fooled. Most current attacks on graph neural networks mainly use gradient information to guide the attack and achieve outstanding performance. Nevertheless, their high time and space complexity makes them unmanageable for large-scale graphs. We argue that the main reason is that they must operate on the entire graph, so time and space costs grow with the size of the data. In this work, we propose an efficient Simplified Gradient-based Attack (SGA) framework to bridge this gap. SGA causes graph neural networks to misclassify specific target nodes through a multi-stage optimized attack framework that needs only a much smaller subgraph. In addition, we present a practical metric named Degree Assortativity Change (DAC) for measuring the impact of adversarial attacks on graph data. We evaluate our attack method on four real-world datasets by attacking several commonly used graph neural networks. The experimental results show that SGA achieves significant time and memory efficiency improvements while maintaining considerable attack performance compared to other state-of-the-art attack methods.
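The DAC metric can be illustrated with a short sketch. Degree assortativity is the Pearson correlation between the degrees at either end of each edge; DAC can then be taken as the change in this coefficient before and after an attack. The helper names (`degree_assortativity`, `dac`) and the exact DAC formula below are illustrative assumptions, not the paper's reference implementation:

```python
from math import sqrt

def degree_assortativity(edges):
    """Pearson correlation of the degrees at either end of each undirected edge.

    Illustrative pure-Python sketch; the paper's exact formulation may differ.
    """
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # Each undirected edge contributes both orderings (du, dv) and (dv, du).
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

def dac(edges_before, edges_after):
    """Degree Assortativity Change: shift in assortativity caused by an attack
    (assumed here to be the absolute difference)."""
    return abs(degree_assortativity(edges_after) - degree_assortativity(edges_before))

# Toy example: an adversarial edge that links the hub (node 0) to a leaf (node 4).
clean = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]
attacked = clean + [(0, 4)]
change = dac(clean, attacked)  # larger values = more structural distortion
```

On this toy graph the single injected edge shifts the assortativity coefficient noticeably, which is exactly the kind of structural footprint DAC is meant to expose.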


Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective

Graph neural networks (GNNs) which apply the deep neural networks to gra...

Scalable Attack on Graph Data by Injecting Vicious Nodes

Recent studies have shown that graph convolution networks (GCNs) are vul...

Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs

Graph neural network (GNN) with a powerful representation capability has...

Scalable Adversarial Attack on Graph Neural Networks with Alternating Direction Method of Multipliers

Graph neural networks (GNNs) have achieved high performance in analyzing...

Simple and Efficient Partial Graph Adversarial Attack: A New Perspective

As the study of graph neural networks becomes more intensive and compreh...

Revisiting Item Promotion in GNN-based Collaborative Filtering: A Masked Targeted Topological Attack Perspective

Graph neural networks (GNN) based collaborative filtering (CF) have attr...

An Incremental Gray-box Physical Adversarial Attack on Neural Network Training

Neural networks have demonstrated remarkable success in learning and sol...
