
CNNs are Globally Optimal Given Multi-Layer Support

Authors:
Huang, Chen
Kong, Chen
Lucey, Simon
Publication Year:
2017

Abstract

Stochastic Gradient Descent (SGD) is the central workhorse for training modern CNNs. Although it gives impressive empirical performance, it can be slow to converge. In this paper we explore a novel strategy for training a CNN using an alternation scheme that offers substantial speedups during training. We make the following contributions: (i) we replace the ReLU non-linearity within a CNN with positive hard-thresholding, (ii) we reinterpret this non-linearity as a binary state vector that makes the entire CNN linear once the multi-layer support is known, and (iii) we demonstrate that under certain conditions a global optimum of the CNN can be found through local descent. We then employ a novel alternation strategy (between weights and support) for CNN training that leads to substantially faster convergence rates, enjoys nice theoretical properties, and achieves state-of-the-art results on large-scale datasets (e.g. ImageNet) as well as other standard benchmarks.
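The core idea in contribution (ii) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it assumes that "positive hard-thresholding" keeps the k largest positive pre-activations and zeroes the rest (the value of k and the exact thresholding rule are assumptions here). Once the binary support mask is fixed, the layer's output is a plain elementwise mask times a matrix product, and hence linear in the weights W:

```python
import numpy as np

def positive_hard_threshold(z, k):
    """Keep the k largest positive entries of z; zero the rest.

    Returns the activation and its binary support mask.
    (Hypothetical stand-in for the paper's non-linearity.)
    """
    mask = np.zeros_like(z)
    top_idx = np.argsort(z)[::-1][:k]  # indices of the k largest entries
    for i in top_idx:
        if z[i] > 0:                   # only positive entries survive
            mask[i] = 1.0
    return mask * z, mask

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 8))        # layer weights
x = rng.standard_normal(8)             # input vector

a, b = positive_hard_threshold(W @ x, k=2)

# With the support b held fixed, the activation is b * (W @ x),
# which is linear in W -- the basis for alternating between
# solving for weights and updating the support.
assert np.allclose(a, b * (W @ x))
```

Under this view, one alternation step would fix all the masks b (making the whole network linear in its weights) and update the weights, then fix the weights and recompute the supports.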

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1712.02501
Document Type:
Working Paper