DMFF: dual-way multimodal feature fusion for 3D object detection.
- Source :
- Signal, Image & Video Processing; Feb2024, Vol. 18 Issue 1, p455-463, 9p
- Publication Year :
- 2024
Abstract
- Recently, multimodal 3D object detection, which fuses complementary information from LiDAR data and RGB images, has been an active research topic. However, fusing images and point clouds is not trivial because of their different representations, and inadequate feature fusion further degrades detection performance. To address these problems, we convert images into pseudo point clouds via depth completion and adopt a more efficient feature fusion method. In this paper, we propose a dual-way multimodal feature fusion network (DMFF) for 3D object detection. Specifically, we first use a dual-stream feature extraction module (DSFE) to generate homogeneous LiDAR and pseudo region-of-interest (RoI) features. Then, we propose a dual-way feature interaction method (DWFI) that enables both intermodal and intramodal interaction between the two feature streams. Next, we design a local attention feature fusion module (LAFF) that selects which input features are most likely to contribute to the desired output. The proposed DMFF achieves state-of-the-art performance on the KITTI dataset. [ABSTRACT FROM AUTHOR]
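The abstract does not give the paper's exact formulation of attention-based fusion, but the general idea of gating two homogeneous RoI feature streams can be sketched as follows. All names, shapes, and the sigmoid gating scheme are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def attention_fuse(lidar_feat, pseudo_feat):
    """Illustrative attention-weighted fusion of two homogeneous RoI
    feature maps of shape (C, N). The per-channel gate here is derived
    from pooled features; in a real network it would be a learned layer."""
    # Concatenate along the channel axis: (C, N) + (C, N) -> (2C, N)
    concat = np.concatenate([lidar_feat, pseudo_feat], axis=0)
    # Pool over spatial positions, then squash to (0, 1) with a sigmoid
    # to obtain one attention weight per channel.
    pooled = concat.mean(axis=1, keepdims=True)      # (2C, 1)
    gate = 1.0 / (1.0 + np.exp(-pooled))             # sigmoid gate
    g_lidar, g_pseudo = np.split(gate, 2, axis=0)    # (C, 1) each
    # Weighted sum decides how much each modality contributes per channel.
    return g_lidar * lidar_feat + g_pseudo * pseudo_feat
```

The output keeps the input shape `(C, N)`, so the fused features can replace either stream in a downstream detection head.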
Details
- Language :
- English
- ISSN :
- 1863-1703
- Volume :
- 18
- Issue :
- 1
- Database :
- Complementary Index
- Journal :
- Signal, Image & Video Processing
- Publication Type :
- Academic Journal
- Accession number :
- 175023553
- Full Text :
- https://doi.org/10.1007/s11760-023-02772-z