
DeFT-Mamba: Universal Multichannel Sound Separation and Polyphonic Audio Classification

Authors:
Lee, Dongheon
Choi, Jung-Woo
Publication Year:
2024

Abstract

This paper presents a framework for universal sound separation and polyphonic audio classification, addressing the challenges of separating and classifying individual sound sources in a multichannel mixture. The proposed framework, DeFT-Mamba, combines the dense frequency-time attentive network (DeFTAN) with Mamba to extract sound objects, capturing local time-frequency relations through a gated convolution block and global time-frequency relations through a position-wise Hybrid Mamba. DeFT-Mamba surpasses existing separation and classification networks by a large margin, particularly in complex scenarios involving in-class polyphony. In addition, a classification-based source counting method is introduced to identify the presence of multiple sources, outperforming conventional threshold-based approaches. Separation refinement tuning is also proposed to further improve performance. The framework is trained and tested on a multichannel universal sound separation dataset developed in this work, designed to mimic realistic environments with moving sources and varying onsets and offsets of polyphonic events.

Comment: 5 pages, 2 figures
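The gated convolution idea mentioned in the abstract can be illustrated with a minimal GLU-style sketch. This is an assumption-laden toy, not the paper's implementation: the names `conv1d` and `gated_conv` are illustrative, and the actual DeFT-Mamba block operates on dense multichannel time-frequency features rather than a 1-D signal.

```python
import numpy as np

def conv1d(x, kernel):
    """Naive 'valid' 1-D correlation; stands in for a learned convolution."""
    n = len(kernel)
    return np.array([np.dot(x[i:i + n], kernel) for i in range(len(x) - n + 1)])

def gated_conv(x, k_feat, k_gate):
    """GLU-style gating (a sketch, not the paper's block): a feature branch
    is modulated elementwise by a sigmoid gate branch, so the gate in (0, 1)
    can suppress or pass local content."""
    feat = conv1d(x, k_feat)
    gate = 1.0 / (1.0 + np.exp(-conv1d(x, k_gate)))  # sigmoid keeps gate in (0, 1)
    return feat * gate

# Toy input: since the gate lies in (0, 1), the gated output can never
# exceed the feature branch in magnitude.
x = np.array([0.0, 1.0, 2.0, 1.0, 0.0, -1.0])
y = gated_conv(x, np.array([1.0, 0.0, -1.0]), np.array([0.5, 0.5, 0.5]))
```

The sigmoid gate acts as a learned, position-dependent soft mask over the feature branch, which is one common way such blocks capture local structure.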

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.12413
Document Type:
Working Paper