MetaMixUp: Learning Adaptive Interpolation Policy of MixUp with Meta-Learning

Authors:
Mai, Zhijun
Hu, Guosheng
Chen, Dexiong
Shen, Fumin
Shen, Heng Tao
Publication Year:
2019
Publisher:
arXiv, 2019.

Abstract

MixUp is an effective data augmentation method that regularizes deep neural networks via random linear interpolations between pairs of samples and their labels. It plays an important role in model regularization, semi-supervised learning and domain adaptation. However, despite its empirical success, the deficiency of randomly mixing samples has been poorly studied. Since deep networks are capable of memorizing the entire dataset, corrupted samples generated by vanilla MixUp with a badly chosen interpolation policy will degrade network performance. To overcome the underfitting caused by corrupted samples, and inspired by meta-learning (learning to learn), we propose MetaMixUp, a novel technique for learning to mix up. Unlike vanilla MixUp, which samples the interpolation policy from a predefined distribution, we introduce a meta-learning based online optimization approach that learns the interpolation policy dynamically, in a data-adaptive way. The validation-set performance obtained via meta-learning captures the underfitting issue and thus provides more information for refining the interpolation policy. Furthermore, we adapt our method to pseudo-label based semi-supervised learning (SSL) along with a refined pseudo-labeling strategy. In our experiments, our method achieves better performance than vanilla MixUp and its variants under the supervised learning configuration. In particular, extensive experiments show that MetaMixUp-adapted SSL greatly outperforms MixUp and many state-of-the-art methods on the CIFAR-10 and SVHN benchmarks under the SSL configuration.
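To make the two mechanisms in the abstract concrete, here is a minimal PyTorch sketch (not the authors' code): a vanilla MixUp step that draws the interpolation coefficient from Beta(alpha, alpha), and a hypothetical one-step MetaMixUp update that instead treats the coefficient as a learnable parameter and refines it with a meta-gradient of the validation loss through a virtual SGD step. The function names (`mixup_batch`, `metamixup_lambda_step`), the learning rates, and the one-step MAML-style lookahead are illustrative assumptions; the paper's actual online optimization may differ in its details.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0

def mixup_batch(x1, y1_onehot, x2, y2_onehot, alpha=1.0):
    """Vanilla MixUp: sample lambda from Beta(alpha, alpha) and linearly
    interpolate a pair of samples and their one-hot labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1_onehot + (1.0 - lam) * y2_onehot
    return x, y

def metamixup_lambda_step(model, x1, y1, x2, y2, x_val, y_val,
                          lam, inner_lr=0.1, meta_lr=0.05):
    """Hypothetical MetaMixUp update of a learned coefficient `lam`
    (a scalar tensor with requires_grad=True). A sketch only: the
    one-step lookahead and hyperparameters are assumptions."""
    # Mix the inputs; mixing integer labels is equivalent to mixing losses.
    x_mix = lam * x1 + (1.0 - lam) * x2
    logits = model(x_mix)
    train_loss = (lam * F.cross_entropy(logits, y1)
                  + (1.0 - lam) * F.cross_entropy(logits, y2))
    # Virtual SGD step on the weights, kept in the graph (create_graph=True)
    # so the validation loss below stays differentiable w.r.t. lam.
    names, params = zip(*model.named_parameters())
    grads = torch.autograd.grad(train_loss, params, create_graph=True)
    fast_weights = {n: p - inner_lr * g
                    for n, p, g in zip(names, params, grads)}
    # Validation loss of the virtually updated model drives the meta-gradient.
    val_logits = functional_call(model, fast_weights, (x_val,))
    val_loss = F.cross_entropy(val_logits, y_val)
    (lam_grad,) = torch.autograd.grad(val_loss, lam)
    with torch.no_grad():
        lam -= meta_lr * lam_grad
        lam.clamp_(0.0, 1.0)  # keep the coefficient a valid mixing weight
    return lam
```

In such a scheme the meta-step would alternate with ordinary training: update the coefficient on a small held-out batch, then train the network on samples mixed with the refined coefficient.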

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....7875e66e8b90ac721002879081b666f2
Full Text:
https://doi.org/10.48550/arxiv.1908.10059