
New Interpretations of Normalization Methods in Deep Learning

Authors :
Sun, Jiacheng
Cao, Xiangyong
Liang, Hanwen
Huang, Weiran
Chen, Zewei
Li, Zhenguo
Publication Year :
2020

Abstract

In recent years, a variety of normalization methods have been proposed to help train neural networks, such as batch normalization (BN), layer normalization (LN), weight normalization (WN), and group normalization (GN). However, mathematical tools for analyzing all of these normalization methods have been lacking. In this paper, we first propose a lemma to define some necessary tools. We then use these tools to conduct a detailed analysis of popular normalization methods and obtain the following conclusions: 1) most normalization methods can be interpreted in a unified framework, namely normalizing pre-activations or weights onto a sphere; 2) since most existing normalization methods are scaling invariant, optimization can be conducted on a sphere with the scaling symmetry removed, which helps stabilize network training; 3) we prove that training with these normalization methods increases the norm of the weights, which could cause adversarial vulnerability because it amplifies the attack. Finally, a series of experiments is conducted to verify these claims.

Comment: Accepted by AAAI 2020
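The scaling-invariance property at the heart of conclusion 2) can be checked numerically. Below is a minimal NumPy sketch (not code from the paper) showing that batch normalization of pre-activations is invariant to rescaling the weight matrix: the `batch_norm` helper and the scale factor are illustrative assumptions.

```python
import numpy as np

def batch_norm(z, eps=1e-5):
    # Normalize pre-activations over the batch dimension (no learned affine).
    return (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 8))   # a batch of 32 inputs with 8 features
w = rng.normal(size=(8, 4))    # weight matrix of a linear layer

out1 = batch_norm(x @ w)
out2 = batch_norm(x @ (5.0 * w))  # rescale all weights by 5

# Scaling w multiplies both the centered pre-activations and their
# standard deviation by the same factor, so the normalized outputs agree
# (up to the small epsilon term).
print(np.allclose(out1, out2))  # True
```

Because the loss therefore depends only on the direction of `w` and not its norm, optimization can be restricted to the unit sphere, which is the symmetry-removal argument the abstract refers to.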

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2006.09104
Document Type :
Working Paper