Deep Domain-Adversarial Image Generation for Domain Generalisation

Authors :
Zhou, Kaiyang
Yang, Yongxin
Hospedales, Timothy
Xiang, Tao
Publication Year :
2020

Abstract

Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset with a different distribution. To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains. In this paper, we propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG). Specifically, DDAIG consists of three components, namely a label classifier, a domain classifier and a domain transformation network (DoTNet). The goal of DoTNet is to map the source training data to unseen domains. This is achieved with a learning objective formulated to ensure that the generated data can be correctly classified by the label classifier while fooling the domain classifier. By augmenting the source training data with the generated unseen-domain data, we make the label classifier more robust to unknown domain changes. Extensive experiments on four DG datasets demonstrate the effectiveness of our approach.

Comment: 8 pages
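To make the stated objective concrete, below is a minimal PyTorch-style sketch of a DoTNet update consistent with the abstract: the transformed images should still be correctly classified by the label classifier while fooling the domain classifier. The function and variable names (dotnet_step, label_clf, domain_clf, lmda) and the additive-perturbation form of the transform are illustrative assumptions, not details taken from the paper.

```python
import torch.nn.functional as F

def dotnet_step(dotnet, label_clf, domain_clf, opt, x, y, d, lmda=0.3):
    """One assumed DoTNet update on a batch of source images x,
    class labels y and domain labels d (opt optimises DoTNet only)."""
    opt.zero_grad()
    x_new = x + dotnet(x)                                # assumed perturbation-style transform
    loss_label = F.cross_entropy(label_clf(x_new), y)    # keep the correct class label
    loss_domain = F.cross_entropy(domain_clf(x_new), d)  # to be maximised, i.e. fool the domain classifier
    (loss_label - lmda * loss_domain).backward()         # minimise label loss, maximise domain loss
    opt.step()
    return x_new.detach()                                # synthetic "unseen-domain" images
```

Under this reading, the label classifier would then be trained on both the original batch x and the returned x_new, which corresponds to the augmentation step the abstract describes.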

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2003.06054
Document Type :
Working Paper