Safe Multi-Task Learning

11/20/2021
by Pengxin Guo, et al.

In recent years, Multi-Task Learning (MTL) has attracted much attention due to its good performance in many applications. However, many existing MTL models cannot guarantee that their performance on each task is no worse than that of the corresponding single-task counterpart. Though this phenomenon has been empirically observed in prior work, little effort has been made to handle the resulting problem, which is formally defined as negative sharing in this paper. To achieve safe multi-task learning, where no negative sharing occurs, we propose a Safe Multi-Task Learning (SMTL) model, which consists of a public encoder shared by all the tasks, private encoders, gates, and private decoders. Specifically, each task has a private encoder, a gate, and a private decoder, where the gate learns how to combine the outputs of the private encoder and the public encoder for the downstream private decoder. To reduce the storage cost during the inference stage, a lite version of SMTL is proposed that allows the gate to choose either the public encoder or the corresponding private encoder. Moreover, we propose a variant of SMTL that places all the gates after the decoders of all the tasks. Experiments on several benchmark datasets demonstrate the effectiveness of the proposed methods.
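The abstract describes the SMTL architecture only at a high level. As a minimal hypothetical sketch (the paper's actual encoders and decoders are deep networks trained end-to-end; here they are stand-in random linear maps, and the per-task gate is a single sigmoid-activated scalar), the gating between the public and private encoders might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(d_in, d_out):
    """Toy stand-in for an encoder/decoder: one random linear map."""
    W = rng.standard_normal((d_in, d_out)) * 0.1
    return lambda x: x @ W

d_in, d_h, n_tasks = 8, 16, 3
public_enc = linear(d_in, d_h)                         # shared by all tasks
private_encs = [linear(d_in, d_h) for _ in range(n_tasks)]
decoders = [linear(d_h, 1) for _ in range(n_tasks)]
gate_logits = np.zeros(n_tasks)                        # one learnable gate per task

def forward(x, t):
    """SMTL: the gate softly mixes the public and private encoder outputs."""
    g = 1.0 / (1.0 + np.exp(-gate_logits[t]))          # sigmoid gate in (0, 1)
    h = g * public_enc(x) + (1.0 - g) * private_encs[t](x)
    return decoders[t](h)

def forward_lite(x, t):
    """Lite SMTL: the gate makes a hard choice of one encoder,
    so only that encoder is needed at inference time."""
    use_public = gate_logits[t] >= 0.0                 # hard selection
    h = public_enc(x) if use_public else private_encs[t](x)
    return decoders[t](h)
```

The lite variant illustrates the storage saving claimed in the abstract: once the hard gate has picked an encoder for a task, the other encoder can be dropped for that task at inference time.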

