
Approximating the Gradient of Cross-Entropy Loss Function

Authors :
Li Li
Milos Doroslovacki
Murray H. Loew
Source :
IEEE Access, Vol 8, Pp 111626-111635 (2020)
Publication Year :
2020
Publisher :
IEEE, 2020.

Abstract

A loss function plays two crucial roles in training a conventional discriminant deep neural network (DNN): (i) it measures the goodness of classification and (ii) it generates the gradients that drive the training of the network. In this paper, we approximate the gradients of the cross-entropy loss, the loss function most often used in classification DNNs. The proposed approximations are noise-free, meaning they depend only on the labels of the training set. They have a fixed length, which avoids the vanishing-gradient problem of the cross-entropy loss. Because they skip the forward pass, the proposed approximations reduce the computational complexity to O(n), where n is the batch size. Two claims are established from experiments training DNNs with the proposed approximations: (i) it is possible to train a discriminant network without explicitly defining a loss function, and (ii) successful training does not imply that the network parameters converge to fixed values. The experiments show that the proposed gradient approximations achieve classification accuracy comparable to that of conventional loss functions and can accelerate training on multiple datasets.
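To make the abstract's idea concrete, below is a minimal sketch of a label-only, fixed-norm surrogate for the output-layer gradient of the cross-entropy loss. The exact gradient with respect to the logits z is softmax(z) - y, where y is the one-hot label; the surrogate shown here replaces softmax(z) with the uniform distribution 1/K, so it depends only on the labels, has constant norm sqrt((K-1)/K), and needs no forward pass. This particular choice is an illustrative assumption consistent with the abstract's description, not necessarily the paper's exact construction.

```python
# Sketch of a label-only, fixed-norm surrogate for the cross-entropy
# gradient at the output layer. Illustration only: it replaces the softmax
# output in the exact gradient (softmax(z) - y) with the uniform
# distribution 1/K, so the result depends only on the labels. The paper's
# actual approximations may differ.
import numpy as np

def exact_ce_grad(logits, labels):
    """Exact gradient of cross-entropy w.r.t. logits: softmax(z) - y."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    y = np.eye(logits.shape[1])[labels]              # one-hot targets
    return p - y

def label_only_grad(labels, num_classes):
    """Hypothetical noise-free surrogate: uniform(1/K) - one_hot(y).

    Requires no forward pass, so it costs O(n) for a batch of n samples,
    and every row has the same fixed norm sqrt((K-1)/K), which sidesteps
    the vanishing gradient near confident predictions.
    """
    y = np.eye(num_classes)[labels]
    return 1.0 / num_classes - y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(4, 10))                # batch of 4, 10 classes
    labels = rng.integers(0, 10, size=4)
    g = label_only_grad(labels, 10)
    print(np.linalg.norm(g, axis=1))                 # identical fixed norms
```

In a training loop, such a surrogate would be fed directly into backpropagation in place of the true output-layer gradient; because its norm never shrinks as predictions become confident, it keeps a constant-magnitude training signal where the exact cross-entropy gradient would vanish.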

Details

Language :
English
ISSN :
2169-3536
Volume :
8
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.156340beb6de4632b54d56a97050df34
Document Type :
Article
Full Text :
https://doi.org/10.1109/ACCESS.2020.3001531