
What Variables Affect Out-of-Distribution Generalization in Pretrained Models?

Authors :
Harun, Md Yousuf
Lee, Kyungbok
Gallardo, Jhair
Krishnan, Giri
Kanan, Christopher
Publication Year :
2024

Abstract

Embeddings produced by pre-trained deep neural networks (DNNs) are widely used; however, their efficacy for downstream tasks can vary widely. We study the factors influencing transferability and out-of-distribution (OOD) generalization of pre-trained DNN embeddings through the lens of the tunnel effect hypothesis, which is closely related to intermediate neural collapse. This hypothesis suggests that deeper DNN layers compress representations and hinder OOD generalization. Contrary to earlier work, our experiments show this is not a universal phenomenon. We comprehensively investigate the impact of DNN architecture, training data, image resolution, and augmentations on transferability. We identify that training with high-resolution datasets containing many classes greatly reduces representation compression and improves transferability. Our results emphasize the danger of generalizing findings from toy datasets to broader contexts.

Comment: Accepted to NeurIPS 2024
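To make the layer-wise analysis concrete, below is a minimal sketch of linear probing over frozen embeddings, the standard tool for measuring how transferability varies with network depth. The model choice (torchvision's resnet18), the probed stages, and the random tensors standing in for an OOD dataset are all illustrative assumptions, not the authors' exact protocol.

```python
# Minimal sketch: probe each residual stage of a frozen pre-trained network
# to see how linearly decodable class information varies with depth.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

# Capture embeddings after each residual stage via forward hooks.
feats = {}
def make_hook(name):
    def hook(module, inp, out):
        feats[name] = out.detach()
    return hook

for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(make_hook(name))

# Placeholder batch: random images stand in for a real OOD dataset.
x = torch.randn(64, 3, 224, 224)
y = torch.randint(0, 10, (64,))
with torch.no_grad():
    model(x)

# Fit a linear probe on each stage's pooled embedding. Falling probe
# accuracy at deeper stages would indicate representation compression
# of the kind the tunnel effect hypothesis predicts.
for name, f in feats.items():
    z = f.mean(dim=(2, 3))  # global average pool -> (N, C)
    probe = nn.Linear(z.shape[1], 10)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    for _ in range(50):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(probe(z), y)
        loss.backward()
        opt.step()
    acc = (probe(z).argmax(1) == y).float().mean().item()
    print(f"{name}: probe accuracy {acc:.2f}")
```

For brevity the probe above is evaluated on the same batch it was trained on; a real experiment would train and evaluate on separate splits of an actual OOD dataset, with the depth at which probe accuracy peaks and then declines marking the onset of the tunnel.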

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.15018
Document Type :
Working Paper