
Augment your batch: better training with larger batches

Authors: Hoffer, Elad; Ben-Nun, Tal; Hubara, Itay; Giladi, Niv; Hoefler, Torsten; Soudry, Daniel
Publication Year: 2019

Abstract

Large-batch SGD is important for scaling training of deep neural networks. However, without fine-tuning hyperparameter schedules, the generalization of the model may be hampered. We propose to use batch augmentation: replicating instances of samples within the same batch with different data augmentations. Batch augmentation acts as a regularizer and an accelerator, increasing both generalization and performance scaling. We analyze the effect of batch augmentation on gradient variance and show that it empirically improves convergence for a wide variety of deep neural networks and datasets. Our results show that batch augmentation reduces the number of necessary SGD updates to achieve the same accuracy as the state-of-the-art. Overall, this simple yet effective method enables faster training and better generalization by allowing more computational resources to be used concurrently.
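A minimal sketch of the batch-augmentation idea described in the abstract, written in PyTorch. This is not the authors' code; the function name batch_augment_collate, the parameter m, and the specific transforms are illustrative assumptions. The sketch replicates each sample m times within the batch, applying an independent random augmentation to every copy, so a loader batch of B samples becomes an effective batch of B * m.

```python
import torch
from torchvision import transforms

# Illustrative augmentation pipeline; any stochastic transform works here.
augment = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

def batch_augment_collate(samples, m=4):
    """Replicate each (PIL image, label) pair m times, applying an independent
    random augmentation to every copy, so a loader batch of B samples yields
    an effective batch of B * m (a sketch of the batch-augmentation idea)."""
    images, labels = [], []
    for img, label in samples:
        for _ in range(m):
            images.append(augment(img))   # fresh random augmentation per copy
            labels.append(label)
    return torch.stack(images), torch.tensor(labels)

# Hypothetical usage: the dataset should return un-augmented (PIL image, label) pairs.
# loader = torch.utils.data.DataLoader(
#     dataset, batch_size=64,
#     collate_fn=lambda batch: batch_augment_collate(batch, m=4))
```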

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.1901.09335
Document Type: Working Paper