
Learning spectro-temporal representations of complex sounds with parameterized neural networks.

Authors :
Riad R
Karadayi J
Bachoud-Lévi AC
Dupoux E
Source :
The Journal of the Acoustical Society of America [J Acoust Soc Am] 2021 Jul; Vol. 150 (1), pp. 353.
Publication Year :
2021

Abstract

Deep learning models have become potential candidates for auditory neuroscience research, thanks to their recent successes in a variety of auditory tasks, yet these models often lack the interpretability needed to fully understand the exact computations they perform. Here, we proposed a parametrized neural network layer, which computes specific spectro-temporal modulations based on Gabor filters [learnable spectro-temporal receptive fields (STRFs)] and is fully interpretable. We evaluated this layer on speech activity detection, speaker verification, urban sound classification, and zebra finch call type classification. We found that models based on learnable STRFs are on par with the state of the art on all tasks and obtain the best performance for speech activity detection. As this layer remains a Gabor filter, it is fully interpretable. Thus, we used quantitative measures to describe the distribution of the learned spectro-temporal modulations. The filters adapted to each task and focused mostly on low temporal and spectral modulations. The analyses show that the filters learned on human speech have spectro-temporal parameters similar to those measured directly in the human auditory cortex. Finally, we observed that the tasks were organized in a meaningful way: the human vocalization tasks lay closer to each other, while bird vocalizations lay far from both the human vocalization and urban sound tasks.
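To make the idea of a "parametrized, interpretable layer" concrete, below is a minimal PyTorch sketch of a bank of learnable 2D Gabor filters applied to a spectrogram. The class name, kernel sizes, parameter names, and initialization ranges are illustrative assumptions, not the authors' exact implementation; the only property taken from the abstract is that the learnable parameters are the Gabor parameters themselves (modulation rates and envelope widths), so each trained filter can be read off directly as a point in spectro-temporal modulation space.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableSTRF(nn.Module):
    """Sketch of a learnable Gabor STRF bank over a spectrogram of
    shape (batch, 1, freq, time). Hypothetical names and ranges."""

    def __init__(self, n_filters=64, kernel_size=(9, 41)):
        super().__init__()
        self.kernel_size = kernel_size  # (freq taps, time taps)
        # Learnable Gabor parameters, one per filter: spectral and
        # temporal modulation rates (cycles per kernel sample) and
        # Gaussian envelope widths (in kernel samples).
        self.spectral_mod = nn.Parameter(torch.empty(n_filters).uniform_(-0.25, 0.25))
        self.temporal_mod = nn.Parameter(torch.empty(n_filters).uniform_(-0.25, 0.25))
        self.sigma_f = nn.Parameter(torch.empty(n_filters).uniform_(1.0, 4.0))
        self.sigma_t = nn.Parameter(torch.empty(n_filters).uniform_(2.0, 15.0))

    def _kernels(self):
        kf, kt = self.kernel_size
        dev = self.sigma_f.device
        f = torch.arange(kf, dtype=torch.float32, device=dev) - kf // 2
        t = torch.arange(kt, dtype=torch.float32, device=dev) - kt // 2
        ff = f.view(1, kf, 1)  # broadcast over (filter, freq, time)
        tt = t.view(1, 1, kt)
        # Gaussian envelope times a sinusoidal carrier: a 2D Gabor.
        envelope = torch.exp(
            -(ff ** 2) / (2 * self.sigma_f.view(-1, 1, 1) ** 2)
            - (tt ** 2) / (2 * self.sigma_t.view(-1, 1, 1) ** 2)
        )
        carrier = torch.cos(
            2 * math.pi * (self.spectral_mod.view(-1, 1, 1) * ff
                           + self.temporal_mod.view(-1, 1, 1) * tt)
        )
        return (envelope * carrier).unsqueeze(1)  # (n_filters, 1, kf, kt)

    def forward(self, spectrogram):
        # Kernels are rebuilt from the Gabor parameters at every call,
        # so gradients flow into those parameters during training and
        # the layer stays interpretable: each filter is fully described
        # by its four scalars.
        return F.conv2d(spectrogram, self._kernels(), padding="same")


# Usage on a dummy batch of log-mel spectrograms (values are made up):
layer = LearnableSTRF()
x = torch.randn(8, 1, 64, 400)   # (batch, channel, mel bins, frames)
y = layer(x)                     # (8, 64, 64, 400)
```

After training, the kind of analysis described in the abstract would amount to inspecting the learned `spectral_mod`, `temporal_mod`, and sigma values per task and comparing their distributions, rather than probing opaque convolutional weights.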

Details

Language :
English
ISSN :
1520-8524
Volume :
150
Issue :
1
Database :
MEDLINE
Journal :
The Journal of the Acoustical Society of America
Publication Type :
Academic Journal
Accession number :
34340514
Full Text :
https://doi.org/10.1121/10.0005482