
Label-efficient audio classification through multitask learning and self-supervision

Authors:
Lee, Tyler
Gong, Ting
Padhy, Suchismita
Rouditchenko, Andrew
Ndirango, Anthony
Publication Year:
2019

Abstract

While deep learning has been incredibly successful in modeling tasks with large, carefully curated labeled datasets, its application to problems with limited labeled data remains a challenge. The aim of the present work is to improve the label efficiency of large neural networks operating on audio data through a combination of multitask learning and self-supervised learning on unlabeled data. We trained an end-to-end audio feature extractor based on WaveNet that feeds into simple, yet versatile task-specific neural networks. We describe several easily implemented self-supervised learning tasks that can operate on any large, unlabeled audio corpus. We demonstrate that, in scenarios with limited labeled training data, one can significantly improve the performance of three different supervised classification tasks individually by up to 6% through simultaneous training with these additional self-supervised tasks. We also show that incorporating data augmentation into our multitask setting leads to even further gains in performance.

Comment: Presented at ICLR 2019 Limited Labeled Data (LLD) Workshop
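The abstract describes a shared WaveNet-style encoder feeding simple task-specific heads, trained jointly on a supervised task and several self-supervised pretext tasks. Below is a minimal PyTorch sketch of that setup, not the authors' implementation: the encoder depth and width, the task names (speaker_id, noise_type, flip), and the specific pretext tasks are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DilatedConvEncoder(nn.Module):
    """Shared WaveNet-style encoder: a stack of dilated 1-D convolutions."""
    def __init__(self, channels=64, layers=6):
        super().__init__()
        blocks, in_ch = [], 1
        for i in range(layers):
            # Dilation doubles each layer; padding = dilation keeps length fixed.
            blocks += [nn.Conv1d(in_ch, channels, kernel_size=3,
                                 dilation=2 ** i, padding=2 ** i),
                       nn.ReLU()]
            in_ch = channels
        self.net = nn.Sequential(*blocks)

    def forward(self, x):          # x: (batch, 1, samples)
        h = self.net(x)            # (batch, channels, samples)
        return h.mean(dim=-1)      # global average pool -> (batch, channels)

class MultiTaskModel(nn.Module):
    """One shared encoder feeding a simple linear head per task."""
    def __init__(self, encoder, task_dims, channels=64):
        super().__init__()
        self.encoder = encoder
        self.heads = nn.ModuleDict(
            {name: nn.Linear(channels, dim) for name, dim in task_dims.items()})

    def forward(self, x, task):
        return self.heads[task](self.encoder(x))

# Hypothetical task set: one supervised task with few labels, plus two
# self-supervised pretext tasks that need only unlabeled audio.
model = MultiTaskModel(DilatedConvEncoder(),
                       {"speaker_id": 10,   # supervised classification
                        "noise_type": 4,    # pretext: which augmentation was applied?
                        "flip": 2})         # pretext: is the clip time-reversed?
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(batch_x, batch_y, task):
    """Alternate steps across tasks; every step updates the shared encoder."""
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(batch_x, task), batch_y)
    loss.backward()
    opt.step()
    return loss.item()

# One step on a pretext task with placeholder data:
x = torch.randn(8, 1, 16000)            # eight one-second clips at 16 kHz
y = torch.randint(0, 2, (8,))           # pretext labels come free from the data
train_step(x, y, "flip")
```

Under this reading, the self-supervised heads provide extra gradient signal to the shared encoder, which is how training with unlabeled audio can lift accuracy on the label-scarce supervised task.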

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1910.12587
Document Type:
Working Paper