
Acoustic scene classification using sparse feature learning and event-based pooling

Authors :
Ziwon Hyung
Juhan Nam
Kyogu Lee
Source :
WASPAA
Publication Year :
2013
Publisher :
IEEE, 2013.

Abstract

Recently, unsupervised learning algorithms have been successfully used to represent data in many machine recognition tasks. In particular, sparse feature learning algorithms have shown that they can not only discover meaningful structures in raw data but also outperform many hand-engineered features. In this paper, we apply the sparse feature learning approach to acoustic scene classification. We use a sparse restricted Boltzmann machine to capture manifold local acoustic structures from audio data and represent the data in a high-dimensional sparse feature space given the learned structures. For scene classification, we summarize the local features by pooling over audio scene data. While feature pooling is typically performed over uniformly divided segments, we propose a new pooling method that first detects audio events and then pools only over the detected events, accounting for the irregular occurrence of audio events in acoustic scene data. We evaluate the learned features on the IEEE AASP Challenge development set, comparing them with a baseline model using mel-frequency cepstral coefficients (MFCCs). The results show that the learned features outperform MFCCs, that event-based pooling achieves higher accuracy than uniform pooling, and that a combination of the two methods performs better than either one alone.
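The event-based pooling idea in the abstract can be illustrated with a minimal sketch. The abstract does not specify the event detector or the pooling function, so this example assumes a simple hypothetical criterion (frames whose energy exceeds a fraction of the peak) and max pooling over the detected frames; the paper's actual method may differ.

```python
import numpy as np

def event_based_pooling(features, energies, threshold=0.5):
    """Pool frame-level features only over detected event frames.

    features : (n_frames, n_dims) array of sparse feature activations
    energies : (n_frames,) per-frame energy, used here as a stand-in
               event detector (an assumption, not the paper's detector)
    threshold: fraction of the peak energy above which a frame is
               treated as part of an event (hypothetical criterion)
    """
    # Mark frames whose energy exceeds a fraction of the maximum as events.
    event_mask = energies > threshold * energies.max()
    if not event_mask.any():
        # Fall back to pooling over all frames when no event is detected,
        # which reduces to ordinary uniform pooling over the segment.
        event_mask = np.ones(len(energies), dtype=bool)
    # Max-pool the feature activations over the event frames only.
    return features[event_mask].max(axis=0)
```

Compared with pooling over uniformly divided segments, this restricts the summary statistic to frames likely to contain an audio event, so low-energy background frames do not dilute the pooled representation.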

Details

Database :
OpenAIRE
Journal :
2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics
Accession number :
edsair.doi...........ab9a3ee03668996ad64953bd67dfc69b
Full Text :
https://doi.org/10.1109/waspaa.2013.6701893