Dynamic Gradient Balancing for Enhanced Adversarial Attacks on Multi-Task Models

by Moshe Y. Vardi, et al.

Multi-task learning (MTL) trains a single machine learning model, called a multi-task model, to perform multiple tasks simultaneously. Although the security of single-task classifiers has been extensively studied, several critical security questions remain open for multi-task models: 1) How robust are multi-task models to single-task adversarial machine learning attacks? 2) Can adversarial attacks be designed to attack multiple tasks simultaneously? 3) Do task sharing and adversarial training increase the robustness of multi-task models to adversarial attacks? In this paper, we answer these questions through careful analysis and rigorous experimentation. First, we develop naïve adaptations of single-task white-box attacks and analyze their inherent drawbacks. We then propose a novel attack framework, the Dynamic Gradient Balancing Attack (DGBA). Our framework poses the attack on a multi-task model as an optimization problem based on averaged relative loss change, which can be solved by approximating it as an integer linear programming problem. Extensive evaluation on two popular MTL benchmarks, NYUv2 and Tiny-Taxonomy, demonstrates the effectiveness of DGBA compared to naïve multi-task attack baselines on both clean and adversarially trained multi-task models. The results also reveal a fundamental trade-off: sharing parameters across tasks improves task accuracy but undermines model robustness, because parameter sharing increases attack transferability between tasks.
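To make the core idea concrete, the sketch below implements a simplified, hypothetical multi-task attack in the spirit described above: a PGD-style loop that re-weights per-task gradients at each step so that all tasks' losses rise at comparable *relative* rates. Note the hedge: the actual DGBA framework solves an integer linear program over averaged relative loss changes; the inverse-exponential weighting, the toy quadratic losses, and all function names here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def balanced_multi_task_attack(x0, losses, grads, eps=0.3, alpha=0.05, steps=40):
    """PGD-style attack on a multi-task model (illustrative sketch only).

    At each step, per-task gradients are re-weighted by each task's
    relative loss change so far, so that no single task dominates the
    perturbation. DGBA itself poses this balancing as an integer linear
    program; the softmax-style weighting below is a stand-in.
    """
    base = np.array([L(x0) for L in losses])          # clean per-task losses
    x_adv = x0.copy()
    for _ in range(steps):
        cur = np.array([L(x_adv) for L in losses])
        rel = (cur - base) / (np.abs(base) + 1e-8)    # relative loss change
        # Tasks lagging behind the average relative change get more weight.
        w = np.exp(-(rel - rel.mean()))
        w /= w.sum()
        g = sum(wi * G(x_adv) for wi, G in zip(w, grads))
        x_adv = x_adv + alpha * np.sign(g)            # L_inf PGD ascent step
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)    # project to eps-ball
    return x_adv

# Toy example: two quadratic "task losses" sharing one input.
a, b = np.array([2.0, 2.0]), np.array([3.0, 3.0])
L1 = lambda x: float(np.sum((x - a) ** 2))
L2 = lambda x: float(np.sum((x - b) ** 2))
G1 = lambda x: 2.0 * (x - a)
G2 = lambda x: 2.0 * (x - b)

x0 = np.array([0.0, 0.0])
x_adv = balanced_multi_task_attack(x0, [L1, L2], [G1, G2])
```

On this toy problem the perturbed input raises both task losses at once, which is the behavior a simultaneous multi-task attack is after; a single-task attack would maximize one loss while leaving the other unconstrained.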




