
Towards Mitigating Architecture Overfitting in Dataset Distillation

Authors:
Zhong, Xuyang
Liu, Chen
Publication Year:
2023

Abstract

Dataset distillation methods have demonstrated remarkable performance for neural networks trained with very limited training data. However, a significant challenge arises in the form of architecture overfitting: distilled training data synthesized with a specific network architecture (i.e., the training network) yields poor performance when used to train other network architectures (i.e., test networks). This paper addresses this issue and proposes a series of approaches, spanning both architecture design and training schemes, that can be adopted together to boost generalization performance across different network architectures on the distilled training data. We conduct extensive experiments to demonstrate the effectiveness and generality of our methods. In particular, across various scenarios involving different sizes of distilled data, our approaches achieve comparable or superior performance to existing methods when training networks with larger capacity on the distilled data.
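To make the notion of architecture overfitting concrete, the following is a minimal PyTorch sketch of the cross-architecture evaluation the abstract describes: a test network is trained from scratch on a small distilled set (synthesized with a different training network) and then evaluated on real test data. The helper names (make_convnet, make_resnet18, distilled_images, distilled_labels, test_loader) and hyperparameters are hypothetical placeholders, not the paper's actual pipeline.

import torch
import torch.nn.functional as F

def train_on_distilled(model, images, labels, epochs=300, lr=0.01):
    # Train a test network from scratch on the (small) distilled set.
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        opt.step()
    return model

@torch.no_grad()
def accuracy(model, loader):
    # Evaluate on real test data to quantify cross-architecture generalization.
    model.eval()
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# The distilled set is assumed to have been synthesized with a small ConvNet
# (the "training network"). Architecture overfitting shows up as a large drop
# in accuracy when the same distilled set is used to train a higher-capacity
# "test network" such as a ResNet-18:
#   acc_conv   = accuracy(train_on_distilled(make_convnet(),  distilled_images, distilled_labels), test_loader)
#   acc_resnet = accuracy(train_on_distilled(make_resnet18(), distilled_images, distilled_labels), test_loader)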

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2309.04195
Document Type:
Working Paper