GrOD: Deep Learning with Gradients Orthogonal Decomposition for Knowledge Transfer, Distillation, and Adversarial Training
- Source: ACM Transactions on Knowledge Discovery from Data, 16:1-25
- Publication Year: 2022
- Publisher: Association for Computing Machinery (ACM), 2022
Abstract
- Regularization that incorporates a linear combination of the empirical loss and explicit regularization terms as the loss function has been frequently used for many machine learning tasks. The explicit regularization term is designed in different forms, depending on the application. While regularized learning often boosts performance with higher accuracy and faster convergence, the regularization can sometimes hurt empirical loss minimization and lead to poor performance. To deal with such issues, in this work we propose a novel strategy, namely Gradients Orthogonal Decomposition (GrOD), that improves the training procedure of regularized deep learning. Instead of linearly combining the gradients of the two terms, GrOD re-estimates a new direction for each iteration that does not hurt empirical loss minimization while preserving the regularization effects, through orthogonal decomposition. We have performed extensive experiments using GrOD to improve the commonly used algorithms of transfer learning [2], knowledge distillation [3], and adversarial learning [4]. The experimental results based on large datasets, including Caltech 256 [5], MIT Indoor 67 [6], CIFAR-10 [7], and ImageNet [8], show significant improvement made by GrOD for all three algorithms in all cases.
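- The abstract describes replacing the linear combination of the empirical-loss gradient and the regularization gradient with a direction obtained by orthogonal decomposition. The sketch below illustrates one plausible reading of that idea in a PyTorch-style setting: the regularization gradient is split into components parallel and orthogonal to the empirical-loss gradient, and only the orthogonal part is added, so the regularizer cannot work against empirical-loss descent. The function name `grod_direction` and the specific combination rule are illustrative assumptions, not the paper's published algorithm.

```python
import torch

def grod_direction(grad_emp: torch.Tensor, grad_reg: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a gradient-orthogonal-decomposition update.

    Keeps the empirical-loss gradient intact and adds only the component of
    the regularization gradient that is orthogonal to it, so the regularizer
    does not oppose empirical-loss minimization. Both inputs are flattened
    1-D gradient vectors of equal length.
    """
    denom = grad_emp.dot(grad_emp).clamp_min(1e-12)          # ||g_emp||^2, guarded against zero
    parallel = (grad_reg.dot(grad_emp) / denom) * grad_emp   # projection of g_reg onto g_emp
    orthogonal = grad_reg - parallel                         # component orthogonal to g_emp
    return grad_emp + orthogonal                             # combined descent direction
```

- In practice the two gradient vectors would be obtained by backpropagating the empirical loss and the regularization term separately and flattening the per-parameter gradients (e.g. `torch.cat([p.grad.view(-1) for p in model.parameters()])`); the resulting direction is then scattered back to the parameters for the optimizer step.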
- Subjects: General Computer Science
Details
- ISSN: 1556-472X and 1556-4681
- Volume: 16
- Database: OpenAIRE
- Journal: ACM Transactions on Knowledge Discovery from Data
- Accession number: edsair.doi...........dc9469c11391e28c8481bd04f4870555