Multitask Learning with Single Gradient Step Update for Task Balancing
Multitask learning is a methodology for improving generalization performance while also reducing computational cost and memory usage. However, learning multiple tasks simultaneously can be more difficult than learning a single task because it can cause an imbalance problem among the tasks. To address this imbalance problem, we propose an algorithm that balances tasks at the gradient level by applying the training procedure of gradient-based meta-learning to multitask learning. The proposed method trains the shared layer and the task-specific layers separately, so that the two kinds of layers, which play different roles in a multitask network, can each be fitted to their own purpose. In particular, the shared layer, which contains knowledge common to all tasks, is trained with a single gradient step update and an inner/outer loop training procedure that mitigates the imbalance problem at the gradient level. We apply the proposed method to various multitask computer vision problems and achieve state-of-the-art performance.
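The abstract describes a training scheme in which the shared layer is adapted with a single inner gradient step per task before the outer update, while task-specific layers are trained separately. The following is a minimal NumPy sketch of that idea on a toy two-task regression problem; the network shapes, learning rates, and the first-order outer gradient (as in first-order MAML) are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): two regression tasks that share the same
# input features and a common hidden representation.
X = rng.normal(size=(64, 4))
W_true = rng.normal(size=(4, 2))
Y = X @ W_true                      # column t holds the targets of task t

W = 0.1 * rng.normal(size=(4, 3))  # shared layer
V = np.ones((2, 3))                # one task-specific head per task

alpha, beta = 0.05, 0.05           # inner / outer learning rates (assumed)

def loss_and_grads(W, v, t):
    """Squared-error loss of task t and its gradients w.r.t. W and v."""
    hidden = X @ W
    err = hidden @ v - Y[:, t]
    loss = np.mean(err ** 2)
    g_v = 2.0 * hidden.T @ err / len(X)
    g_W = 2.0 * X.T @ np.outer(err, v) / len(X)
    return loss, g_W, g_v

initial_loss = sum(loss_and_grads(W, V[t], t)[0] for t in range(2))

for step in range(500):
    outer_g = np.zeros_like(W)
    for t in range(2):
        # Inner loop: a single gradient step on a temporary copy of the
        # shared layer, with the task-specific head held fixed.
        _, g_W, _ = loss_and_grads(W, V[t], t)
        W_adapted = W - alpha * g_W
        # Outer loop: accumulate each task's gradient evaluated at its
        # adapted shared parameters (first-order approximation).
        _, g_W2, _ = loss_and_grads(W_adapted, V[t], t)
        outer_g += g_W2
        # Task-specific layers are trained separately with plain SGD.
        _, _, g_v = loss_and_grads(W, V[t], t)
        V[t] -= alpha * g_v
    W -= beta * outer_g            # shared layer sees both tasks' gradients

final_loss = sum(loss_and_grads(W, V[t], t)[0] for t in range(2))
```

Because the outer update aggregates gradients taken after each task's own inner adaptation, no single task's raw gradient magnitude dominates the shared-layer update, which is the gradient-level balancing effect the abstract refers to.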