
Understanding Deep Neural Networks via Linear Separability of Hidden Layers

Authors:
Zhang, Chao
Chen, Xinyu
Li, Wensheng
Liu, Lixue
Wu, Wei
Tao, Dacheng
Publication Year:
2023

Abstract

In this paper, we measure the linear separability of hidden layer outputs to study the characteristics of deep neural networks. In particular, we first propose Minkowski difference based linear separability measures (MD-LSMs) to evaluate the degree of linear separability between two point sets. We then demonstrate a synchronicity between the linear separability degree of hidden layer outputs and the network's training performance: if a weight update enhances the linear separability of the hidden layer outputs, the updated network achieves better training performance, and vice versa. Moreover, we study the effects of activation function and network size (including width and depth) on the linear separability of hidden layers. Finally, we conduct numerical experiments to validate our findings on several popular deep networks, including the multilayer perceptron (MLP), convolutional neural network (CNN), deep belief network (DBN), ResNet, VGGNet, AlexNet, vision transformer (ViT) and GoogLeNet.
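The abstract does not reproduce the MD-LSM definitions themselves, but the underlying principle is classical: two point sets A and B are linearly separable if and only if some hyperplane through the origin puts every pairwise Minkowski difference a − b on its positive side. The sketch below (an illustrative assumption, not the paper's actual measure; all function names are mine) searches for such a hyperplane with a simple perceptron on the difference set and reports the fraction of differences correctly sided, a rough proxy for a "degree" of linear separability.

```python
import numpy as np

def minkowski_difference(A, B):
    # All pairwise differences a - b for a in A, b in B;
    # result has shape (len(A) * len(B), dim).
    return (A[:, None, :] - B[None, :, :]).reshape(-1, A.shape[1])

def linear_separability_degree(A, B, epochs=200, lr=0.1):
    # A and B are linearly separable iff some w satisfies w @ (a - b) > 0
    # for every pairwise difference. We look for such a w with a plain
    # perceptron on the difference set and return the fraction of
    # differences on the positive side (1.0 means fully separable).
    # NOTE: this is a sketch of the classical criterion, not the
    # MD-LSMs defined in the paper.
    D = minkowski_difference(A, B)
    w = np.zeros(D.shape[1])
    for _ in range(epochs):
        for d in D:
            if w @ d <= 0:          # misclassified difference: nudge w toward it
                w = w + lr * d
    return float(np.mean(D @ w > 0)), w

# Toy example: two well-separated 2D clusters.
A = np.array([[2.0, 2.0], [3.0, 2.5]])
B = np.array([[-2.0, -2.0], [-3.0, -1.5]])
degree, w = linear_separability_degree(A, B)
print(degree)  # 1.0 for this separable pair
```

Applied to a trained network, one would feed A and B as the hidden layer outputs of two classes; the paper's claim is that this degree rises in step with training performance.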

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2307.13962
Document Type:
Working Paper