
Deep Safe Multi-Task Learning

Authors:
Yue, Zhixiong
Ye, Feiyang
Zhang, Yu
Liang, Christy
Tsang, Ivor W.
Publication Year:
2021

Abstract

In recent years, Multi-Task Learning (MTL) has attracted much attention due to its strong performance in many applications. However, many existing MTL models cannot guarantee that their performance on each task is no worse than that of their single-task counterparts. Although some works have empirically observed this phenomenon, little work has aimed to address the resulting problem. In this paper, we formally define this phenomenon as negative sharing and define safe multi-task learning as the setting in which no negative sharing occurs. To achieve safe multi-task learning, we propose a Deep Safe Multi-Task Learning (DSMTL) model with two learning strategies: individual learning and joint learning. We theoretically study the safeness of both learning strategies in the DSMTL model and show that the proposed methods achieve some versions of safe multi-task learning. Moreover, to improve the scalability of the DSMTL model, we propose an extension that automatically learns a compact architecture and empirically achieves safe multi-task learning. Extensive experiments on benchmark datasets verify the safeness of the proposed methods.
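A minimal formalization of the two notions defined in the abstract, sketched here for orientation rather than taken from the paper: assume a scalar performance measure on each task where higher is better, with $P_t^{\mathrm{MTL}}$ and $P_t^{\mathrm{STL}}$ denoting the performance of the MTL model and of the corresponding single-task model on task $t \in \{1, \dots, T\}$ (these symbols are illustrative and may differ from the paper's notation). Then

\[
\text{negative sharing:} \quad \exists\, t \in \{1, \dots, T\} \;\; \text{such that} \;\; P_t^{\mathrm{MTL}} < P_t^{\mathrm{STL}},
\]
\[
\text{safe multi-task learning:} \quad P_t^{\mathrm{MTL}} \ge P_t^{\mathrm{STL}} \quad \text{for all } t \in \{1, \dots, T\}.
\]

The paper's exact definitions, and the specific "versions" of safeness proved for the individual and joint learning strategies (e.g., in expectation or with high probability), may be more refined than this sketch.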

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2111.10601
Document Type: Working Paper