Polyphonic training set synthesis improves self-supervised urban sound classification.
- Source :
- The Journal of the Acoustical Society of America [J Acoust Soc Am] 2021 Jun; Vol. 149 (6), pp. 4309.
- Publication Year :
- 2021
Abstract
- Machine listening systems for environmental acoustic monitoring face a shortage of expert annotations to be used as training data. To circumvent this issue, the emerging paradigm of self-supervised learning proposes to pre-train audio classifiers on a task whose ground truth is trivially available. Alternatively, training set synthesis consists of annotating a small corpus of acoustic events of interest, which are then automatically mixed at random to form a larger corpus of polyphonic scenes. Prior studies have considered these two paradigms in isolation but rarely in conjunction. Furthermore, the impact of data curation in training set synthesis remains unclear. To fill this gap in research, this article proposes a two-stage approach. In the self-supervised stage, we formulate a pretext task (Audio2Vec skip-gram inpainting) on unlabeled spectrograms from an acoustic sensor network. Then, in the supervised stage, we formulate a downstream task of multilabel urban sound classification on synthetic scenes. We find that training set synthesis benefits overall performance more than self-supervised learning. Interestingly, the geographical origin of the acoustic events in training set synthesis appears to have a decisive impact.
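- To make the training-set-synthesis idea concrete, the following is a minimal NumPy sketch, not the authors' actual pipeline: isolated, labeled event clips are mixed at random gains and onsets into fixed-length polyphonic scenes, each paired with a multi-hot target for multilabel classification. The function name `synthesize_scene` and all parameters are hypothetical illustrations under the stated assumptions.

```python
import numpy as np

def synthesize_scene(events, labels, n_classes, scene_len, rng,
                     max_events=4, gain_db_range=(-6.0, 6.0)):
    """Mix a random subset of labeled event clips into one polyphonic scene.

    events : list of 1-D float arrays (isolated acoustic events, shared sample rate)
    labels : integer class index for each event
    Returns (scene waveform, multi-hot target vector).
    """
    scene = np.zeros(scene_len, dtype=np.float32)
    target = np.zeros(n_classes, dtype=np.float32)

    for _ in range(rng.integers(1, max_events + 1)):
        i = rng.integers(len(events))
        clip = events[i][:scene_len]
        # Random per-event gain (a crude stand-in for SNR control).
        gain = 10.0 ** (rng.uniform(*gain_db_range) / 20.0)
        # Random onset, so events may overlap freely (polyphony).
        onset = rng.integers(0, scene_len - len(clip) + 1)
        scene[onset:onset + len(clip)] += gain * clip
        target[labels[i]] = 1.0  # multilabel target: presence, not count

    return scene, target

# Toy corpus: three synthetic "events" of varying length, two classes.
rng = np.random.default_rng(0)
events = [rng.standard_normal(n).astype(np.float32) for n in (8000, 12000, 6000)]
labels = [0, 1, 1]
scene, target = synthesize_scene(events, labels, n_classes=2,
                                 scene_len=16000, rng=rng)
print(scene.shape, target)  # (16000,) and a multi-hot vector, e.g. [1. 1.]
```
- Repeating this sampling step yields an arbitrarily large supervised corpus from a small annotated one, which is what the article contrasts against self-supervised pre-training.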
Details
- Language :
- English
- ISSN :
- 1520-8524
- Volume :
- 149
- Issue :
- 6
- Database :
- MEDLINE
- Journal :
- The Journal of the Acoustical Society of America
- Publication Type :
- Academic Journal
- Accession number :
- 34241459
- Full Text :
- https://doi.org/10.1121/10.0005277