
Drone audition listening from the sky estimates multiple sound source positions by integrating sound source localization and data association.

Authors :
Wakabayashi, Mizuho
Okuno, Hiroshi G.
Kumon, Makoto
Source :
Advanced Robotics. June 2020, Vol. 34, Issue 11, p. 744-755. 12 p.
Publication Year :
2020

Abstract

Drone audition, i.e. auditory capabilities for a multi-rotor helicopter (hereinafter, drone), has been developed to improve real-world tasks, e.g. search-and-rescue tasks, by compensating for the weaknesses of visual capabilities caused by darkness and occlusion. Most current implementations of robot audition focus on a single sound source. This paper focuses on the estimation of multiple sound source positions from acoustic signals captured by a drone equipped with a microphone array. Because of ego-noise, such as rotor and airflow noise around the drone, estimation of sound source positions is obscured and prone to error. In particular, in the case of multiple sound sources, data association between localization information and sound sources is critical to the performance of such estimation. To cope with uncertainty in data association, we extend the Global Nearest Neighbor (GNN) method to exploit sound source features (GNN-c), because drone audition requires real-time processing or, at least, low latency. The resulting system demonstrates that it can estimate multiple sound source positions with an accuracy of about 3 m. [ABSTRACT FROM AUTHOR]
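The abstract describes associating localization observations with sound-source tracks using a Global Nearest Neighbor scheme extended with sound source features (GNN-c). The sketch below is not the authors' implementation; it is a minimal illustration of the general idea, assuming hypothetical names (associate_gnn_c), illustrative weights (w_pos, w_feat), a gating threshold, and generic feature vectors standing in for whatever source features the paper actually uses.

```python
# Minimal sketch of GNN-style data association extended with a feature term,
# in the spirit of the GNN-c idea described in the abstract.
# All names, weights, and thresholds here are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment


def associate_gnn_c(track_positions, track_features,
                    obs_positions, obs_features,
                    w_pos=1.0, w_feat=2.0, gate=5.0):
    """Assign localization observations to sound-source tracks.

    The cost combines the Euclidean distance between predicted source
    positions and observed positions with a dissimilarity term over
    sound-source feature vectors. Pairs whose cost exceeds `gate` are
    rejected. Returns a list of (track_index, observation_index) pairs.
    """
    n_trk, n_obs = len(track_positions), len(obs_positions)
    cost = np.zeros((n_trk, n_obs))
    for i in range(n_trk):
        for j in range(n_obs):
            d_pos = np.linalg.norm(track_positions[i] - obs_positions[j])
            d_feat = np.linalg.norm(track_features[i] - obs_features[j])
            cost[i, j] = w_pos * d_pos + w_feat * d_feat

    # The Hungarian algorithm yields the globally optimal one-to-one
    # assignment over the whole cost matrix (the "global" in GNN).
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate]
```

In this sketch, the feature term keeps two spatially close sources from swapping tracks when their positions alone are ambiguous, which is the kind of association uncertainty the abstract points to; the single-pass assignment keeps the latency low.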

Details

Language :
English
ISSN :
0169-1864
Volume :
34
Issue :
11
Database :
Academic Search Index
Journal :
Advanced Robotics
Publication Type :
Academic Journal
Accession number :
144282960
Full Text :
https://doi.org/10.1080/01691864.2020.1757506