Prototype based Masked Audio Model for Self-Supervised Learning of Sound Event Detection
- Publication Year :
- 2024
Abstract
- A significant challenge in sound event detection (SED) is the effective use of unlabeled data, given the limited availability of labeled data due to high annotation costs. Semi-supervised algorithms rely on labeled data to learn from unlabeled data, so their performance is constrained by the quality and size of the former. In this paper, we introduce the Prototype based Masked Audio Model (PMAM) algorithm for self-supervised representation learning in SED, to better exploit unlabeled data. Specifically, semantically rich frame-level pseudo labels are constructed from Gaussian mixture model (GMM) based prototypical distribution modeling. These pseudo labels supervise the learning of a Transformer-based masked audio model, in which binary cross-entropy loss is employed instead of the widely used InfoNCE loss so that each prototype contributes an independent loss term; this matters in real scenarios where multiple labels may apply to the same unlabeled data frame. A final stage of fine-tuning with only a small amount of labeled data yields a high-performing SED model. In like-for-like tests on the DESED task, our method achieves a PSDS1 score of 62.5%, surpassing current state-of-the-art models and demonstrating the superiority of the proposed technique.
- Comment: Submitted to ICASSP2025; the code for this paper will be available at https://github.com/cai525/Transformer4SED after the paper is accepted.
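- The following is a minimal sketch (not the authors' released code) of the loss idea described in the abstract: frame embeddings are scored against a set of prototypes, and binary cross-entropy is applied per prototype so several prototypes can be active for the same frame, unlike the softmax coupling imposed by InfoNCE. All names, shapes, and the cosine-similarity scoring are assumptions for illustration; the pseudo labels are assumed to be GMM posteriors in [0, 1].

```python
import torch
import torch.nn.functional as F

def prototype_bce_loss(frame_emb, prototypes, pseudo_labels, mask):
    """Hypothetical per-prototype BCE loss for a masked audio model.

    frame_emb:     (B, T, D) Transformer outputs for partially masked frames
    prototypes:    (K, D)    prototype vectors (e.g., GMM component means)
    pseudo_labels: (B, T, K) per-prototype targets in [0, 1] (e.g., GMM posteriors)
    mask:          (B, T)    1.0 where the frame was masked and must be predicted
    """
    # Cosine similarity between each frame and each prototype -> (B, T, K) logits.
    logits = F.normalize(frame_emb, dim=-1) @ F.normalize(prototypes, dim=-1).T

    # BCE per prototype: each of the K terms is an independent loss contribution,
    # so multiple sound events can be "on" in the same frame.
    per_frame = F.binary_cross_entropy_with_logits(
        logits, pseudo_labels, reduction="none"
    ).mean(dim=-1)

    # Average only over masked frames, as in masked audio modeling.
    return (per_frame * mask).sum() / mask.sum().clamp(min=1.0)
```

- With an InfoNCE-style softmax over prototypes, raising the score of one prototype necessarily lowers the others; the per-prototype sigmoid above avoids that coupling, which is the property the abstract highlights for frames where multiple events overlap.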
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2409.17656
- Document Type :
- Working Paper