Subspace Match Probably Does Not Accurately Assess the Similarity of Learned Representations
- Publication Year: 2019
Abstract
- Learning informative representations of data is one of the primary goals of deep learning, but there is still little understanding of what representations a neural network actually learns. To better understand this, subspace match was recently proposed as a method for assessing the similarity of the representations learned by neural networks. It has been shown that two networks with the same architecture trained from different initializations learn representations whose hidden layers show low similarity under subspace match, even when the output layers show high similarity and the networks exhibit largely similar performance on classification tasks. In this note, we present a simple example motivated by standard results in commutative algebra to illustrate how this can happen, and show that although the subspace match at a hidden layer may be 0, the representations learned may be isomorphic as vector spaces. This leads us to conclude that a subspace match comparison of learned representations may well be uninformative, and it points to the need for better methods of understanding learned representations.
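- The abstract's central claim, that the subspace match at a hidden layer can be 0 while the representations remain isomorphic as vector spaces, can be illustrated with a small linear-algebra sketch. The snippet below is not taken from the paper: it assumes a simplified notion of subspace match as the dimension of the intersection of two column spans (the paper builds on a maximum-matching definition), and the matrices X and Y are hypothetical stand-ins for hidden-layer activations.

```python
# Minimal sketch (not from the paper): two representations with zero overlap
# under a simplified "subspace match" (dimension of the intersection of the
# column spans), yet isomorphic as vector spaces because they have equal rank.
import numpy as np

def span_dim(A, tol=1e-10):
    """Dimension of the column span of A."""
    return np.linalg.matrix_rank(A, tol=tol)

def intersection_dim(A, B, tol=1e-10):
    """dim(U ∩ V) = dim(U) + dim(V) - dim(U + V), where U, V are the column spans."""
    return span_dim(A, tol) + span_dim(B, tol) - span_dim(np.hstack([A, B]), tol)

# Hypothetical hidden-layer "representations" in R^4:
# columns of X span {e1, e2}, columns of Y span {e3, e4}.
X = np.array([[1., 0.],
              [0., 1.],
              [0., 0.],
              [0., 0.]])
Y = np.array([[0., 0.],
              [0., 0.],
              [1., 0.],
              [0., 1.]])

print(intersection_dim(X, Y))            # 0: the spans share only the zero vector
print(span_dim(X), span_dim(Y))          # 2 2: equal dimension
```

- Because both spans have dimension 2, any choice of bases defines a linear isomorphism between them, even though the simplified match score above is zero; this mirrors the phenomenon the note attributes to subspace match at hidden layers.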
Details
- Database: arXiv
- Publication Type: Report
- Accession number: edsarx.1901.00884
- Document Type: Working Paper