
Data-Efficient Generation for Dataset Distillation

Authors:
Li, Zhe
Zhang, Weitong
Cechnicka, Sarah
Kainz, Bernhard
Publication Year:
2024

Abstract

While deep learning techniques have proven successful in image-related tasks, the exponentially increasing data storage and computation costs have become a significant challenge. Dataset distillation addresses these challenges by synthesizing only a few images per class that encapsulate all essential information. Most current methods focus on matching objectives, such as gradient or distribution matching between synthetic and real data. The problems are that the synthetic images are not human-readable and that the distilled dataset's performance is insufficient for downstream learning tasks. Moreover, the distillation time can quickly get out of bounds when the number of synthetic images per class increases even slightly. To address this, we train a class-conditional latent diffusion model capable of generating realistic synthetic images with labels. The sampling rate reaches several tens of images per second. We demonstrate that models can be effectively trained using only a small set of synthetic images and evaluated on a large real test set. Our approach achieved rank 1 in The First Dataset Distillation Challenge at ECCV 2024 on the CIFAR100 and TinyImageNet datasets.

Comment: 13 pages, 7 figures
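As a rough illustration of the generation step described above, the sketch below samples a few labeled images per class from a class-conditional latent diffusion model via Hugging Face diffusers. This is a minimal sketch under stated assumptions: the public checkpoint `facebook/DiT-XL-2-256`, the class indices, and the sampler settings are stand-ins, not the authors' model or hyperparameters (the paper trains its own model on CIFAR100 and TinyImageNet).

```python
# Sketch: build a tiny labeled synthetic set by sampling a class-conditional
# latent diffusion model. The public DiT checkpoint used here is an
# assumption -- it stands in for the paper's own trained model.
import torch
from diffusers import DiTPipeline

pipe = DiTPipeline.from_pretrained(
    "facebook/DiT-XL-2-256", torch_dtype=torch.float16
).to("cuda")

images_per_class = 10      # "only a few images for each class"
class_ids = [207, 360]     # hypothetical class indices for illustration

synthetic_set = []         # (PIL image, label) pairs
for cid in class_ids:
    # One batched call per class; labels condition the latent diffusion model.
    out = pipe(class_labels=[cid] * images_per_class,
               guidance_scale=4.0,
               num_inference_steps=25)
    synthetic_set.extend((img, cid) for img in out.images)
```

Downstream, the collected (image, label) pairs would serve as the distilled training set for a standard classifier, which is then evaluated on the large real test split, following the protocol the abstract describes.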

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.03929
Document Type:
Working Paper