Security-Aware Approximate Spiking Neural Networks

by Syed Tihaam Ahmad et al.

Deep Neural Networks (DNNs) and Spiking Neural Networks (SNNs) are both known to be susceptible to adversarial attacks, and the robustness and defense of DNNs and SNNs under such attacks have therefore been studied extensively in recent years. Compared to accurate SNNs (AccSNNs), approximate SNNs (AxSNNs) are known to be up to 4X more energy-efficient for ultra-low-power applications. Unfortunately, the robustness of AxSNNs under adversarial attacks has not yet been explored. In this paper, we first extensively analyze the robustness of AxSNNs with different structural parameters and approximation levels under two gradient-based and two neuromorphic attacks. We then propose two novel defense methods, precision scaling and approximate quantization-aware filtering (AQF), for securing AxSNNs, and evaluate their effectiveness on both static and neuromorphic datasets. Our results demonstrate that AxSNNs are more prone to adversarial attacks than AccSNNs, but that precision scaling and AQF significantly improve the robustness of AxSNNs. For instance, a PGD attack on an AxSNN results in a 72% accuracy loss relative to an unattacked AccSNN, whereas the same attack on the precision-scaled AxSNN leads to only a 17% accuracy loss on the static MNIST dataset (a 4X robustness improvement). Similarly, a Sparse Attack on an AxSNN leads to a 77% accuracy loss relative to an unattacked AccSNN, whereas the same attack on an AxSNN with AQF leads to only a 2% accuracy loss on the neuromorphic DVS128 Gesture dataset (a 38X robustness improvement).
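The abstract does not detail the paper's attack setup, but PGD (Projected Gradient Descent) itself is a standard gradient-based attack: iteratively step the input in the direction of the loss gradient's sign, then project back into an L-infinity ball of radius eps around the clean input. Below is a minimal NumPy sketch of PGD against a stand-in logistic-regression "model" (the function and parameter names are ours, not the paper's; the paper attacks SNNs, which require surrogate gradients not shown here).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Craft an adversarial example via PGD against a logistic model.

    Maximizes the binary cross-entropy loss by repeated signed-gradient
    ascent on the input, projecting back into the L-inf eps-ball after
    each step (illustrative stand-in for attacking an SNN classifier).
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        grad = (p - y) * w                        # dLoss/dx for cross-entropy
        x_adv = x_adv + alpha * np.sign(grad)     # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
    return x_adv

# Tiny usage example: the attack should raise the model's loss while
# keeping the perturbation within the eps budget.
x = np.array([1.0, -0.5])
w = np.array([2.0, -1.0])
b, y, eps = 0.0, 1.0, 0.1
x_adv = pgd_attack(x, y, w, b, eps=eps)

loss = lambda xi: -np.log(sigmoid(w @ xi + b))   # cross-entropy for y = 1
assert np.max(np.abs(x_adv - x)) <= eps + 1e-9   # perturbation stays bounded
assert loss(x_adv) > loss(x)                     # loss increased under attack
```

The eps-ball projection is what distinguishes PGD from plain gradient ascent: no matter how many steps are taken, the adversarial input stays within a perceptually small distance of the original.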

