MixStyle Neural Networks for Domain Generalization and Adaptation
- Publication Year :
- 2021
- Publisher :
- arXiv, 2021.
Abstract
- Convolutional neural networks (CNNs) often generalize poorly under domain shift. One way to improve domain generalization is to collect diverse source data from multiple relevant domains so that a CNN model can learn more domain-invariant, and hence generalizable, representations. In this work, we address domain generalization with MixStyle, a plug-and-play, parameter-free module that is simply inserted into shallow CNN layers and requires no modification to training objectives. Specifically, MixStyle probabilistically mixes feature statistics between instances. This idea is inspired by the observation that visual domains can often be characterized by image styles, which are in turn encapsulated within instance-level feature statistics in shallow CNN layers. Inserting MixStyle modules therefore synthesizes novel domains, albeit implicitly. MixStyle is not only simple and flexible but also versatile -- it can be used for problems where unlabeled images are available, such as semi-supervised domain generalization and unsupervised domain adaptation, via a simple extension that mixes feature statistics between labeled and pseudo-labeled instances. We demonstrate through extensive experiments that MixStyle can significantly boost out-of-distribution generalization performance across a wide range of tasks, including object recognition, instance retrieval, and reinforcement learning.
- Comment :
- Extension of https://openreview.net/forum?id=6xHJ37MVxxp. Code available at https://github.com/KaiyangZhou/mixstyle-release
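- To make the mechanism in the abstract concrete, below is a minimal PyTorch sketch of mixing instance-level feature statistics (channel-wise mean and standard deviation over spatial dimensions) between instances in a batch. The hyperparameter names (p, alpha, eps) and the Beta-distributed mixing weight are assumptions chosen for illustration; the authors' reference implementation is in the linked repository (https://github.com/KaiyangZhou/mixstyle-release).

import torch
import torch.nn as nn

class MixStyleSketch(nn.Module):
    """Illustrative sketch: probabilistically mixes instance-level
    feature statistics between instances, implicitly synthesizing
    novel 'styles'. Not the authors' official implementation."""

    def __init__(self, p=0.5, alpha=0.1, eps=1e-6):
        super().__init__()
        self.p = p          # probability of applying the mixing (assumed default)
        self.alpha = alpha  # Beta(alpha, alpha) governs the mixing weight (assumed)
        self.eps = eps      # numerical stability when computing std

    def forward(self, x):
        # Only perturb statistics during training, and only with probability p.
        if not self.training or torch.rand(1).item() > self.p:
            return x

        b = x.size(0)
        # Instance-level statistics over the spatial dimensions (H, W).
        mu = x.mean(dim=[2, 3], keepdim=True)
        sig = (x.var(dim=[2, 3], keepdim=True) + self.eps).sqrt()
        x_norm = (x - mu) / sig

        # Mix each instance's statistics with those of a random partner.
        perm = torch.randperm(b, device=x.device)
        lam = torch.distributions.Beta(self.alpha, self.alpha)
        lam = lam.sample((b, 1, 1, 1)).to(x.device)
        mu_mix = lam * mu + (1 - lam) * mu[perm]
        sig_mix = lam * sig + (1 - lam) * sig[perm]

        # Re-apply the mixed style to the normalized content features.
        return x_norm * sig_mix + mu_mix

- As a usage note, the abstract indicates the module is inserted into shallow CNN layers (e.g., after early residual blocks), since that is where image style is encapsulated in instance-level feature statistics; being parameter-free, it adds no weights and leaves the training objective unchanged.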
Details
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....ffe2329fa8e8aa80c188b3ca643a23a7
- Full Text :
- https://doi.org/10.48550/arxiv.2107.02053