
Utilizing Excess Resources in Training Neural Networks

Authors:
Henig, Amit
Giryes, Raja
Publication Year:
2022

Abstract

In this work, we propose Kernel Filtering Linear Overparameterization (KFLO), in which a linear cascade of filtering layers is used during training to improve network performance at test time. We implement this cascade in a kernel filtering fashion, which prevents the trained architecture from becoming unnecessarily deeper. This also allows applying our approach to almost any network architecture and combining the filtering layers into a single layer at test time. Thus, our approach adds no computational complexity during inference. We demonstrate the advantage of KFLO on various network models and datasets in supervised learning.

Comment: Accepted to ICIP 2022. Code available at https://github.com/AmitHenig/KFLO
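The central mechanism the abstract describes, a cascade of linear layers that can be folded into a single equivalent layer for inference, can be sketched in a few lines of PyTorch. The following is a minimal illustration of that folding idea, not the authors' implementation (see the linked repository for that); the `LinearCascade` class, its dimensions, and the `collapse` helper are all hypothetical names chosen for this example.

```python
import torch
import torch.nn as nn

class LinearCascade(nn.Module):
    """Two bias-free linear layers with no nonlinearity between them.

    During training this is overparameterized (two weight matrices),
    but mathematically it is still a single linear map.
    """
    def __init__(self, in_features: int, hidden: int, out_features: int):
        super().__init__()
        self.a = nn.Linear(in_features, hidden, bias=False)
        self.b = nn.Linear(hidden, out_features, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # No activation between the layers, so the cascade stays linear.
        return self.b(self.a(x))

    def collapse(self) -> nn.Linear:
        # b(a(x)) = x A^T B^T = x (B A)^T, so a single layer with
        # weight B A reproduces the cascade exactly at inference time.
        fused = nn.Linear(self.a.in_features, self.b.out_features, bias=False)
        with torch.no_grad():
            fused.weight.copy_(self.b.weight @ self.a.weight)
        return fused

# Usage: the fused layer matches the cascade up to float rounding.
x = torch.randn(4, 16)
cascade = LinearCascade(16, 32, 8)
single = cascade.collapse()
assert torch.allclose(cascade(x), single(x), atol=1e-5)
```

For convolutional filters the analogous fusion composes the kernels themselves rather than plain weight matrices; presumably this is what the abstract's "kernel filtering fashion" refers to, which is what keeps the deployed network from growing deeper.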

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2207.05532
Document Type:
Working Paper