MMFG: Multimodal-based Mutual Feature Gating 3D Object Detection.
- Source :
- Journal of Intelligent & Robotic Systems; Jun 2024, Vol. 110, Issue 2, p1-17, 17p
- Publication Year :
- 2024
Abstract
- To address the problem that image and point cloud features are typically fused in a coarse manner that cannot achieve deep fusion, this paper proposes a multimodal 3D object detection architecture based on a mutual feature gating mechanism. First, since feature aggregation based on the set abstraction layer cannot obtain fine-grained features, a point-based self-attention module is designed. This module is added to the point cloud feature extraction branch to achieve fine-grained feature aggregation while maintaining accurate location information. Second, a new gating mechanism is designed for the deep fusion of image and point cloud features: deep fusion is achieved by mutual feature weighting between the image and the point cloud. The fused features are then fed into a feature refinement network to produce classification confidences and 3D bounding boxes. Finally, a multi-scale detection architecture is proposed to recover a more complete object shape, and a location-based feature encoding algorithm is designed to adaptively focus on points of interest within the region of interest. The whole architecture shows outstanding performance on the KITTI 3D and nuScenes datasets, especially on the difficult level, demonstrating that the framework alleviates the low detection rates of LiDAR-only methods caused by the sparse surface points returned from distant objects. [ABSTRACT FROM AUTHOR]
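- The abstract describes the mutual feature gating fusion only at a high level. A minimal PyTorch sketch of one plausible gating block is given below, assuming per-point features and image features already sampled at the projected point locations; the module and parameter names are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class MutualFeatureGating(nn.Module):
    """Hypothetical sketch: each modality computes a sigmoid gate from its own
    features and uses it to reweight the other modality before fusion."""

    def __init__(self, point_dim: int, image_dim: int, fused_dim: int):
        super().__init__()
        # Gate applied to image features, predicted from point features (and vice versa).
        self.gate_from_points = nn.Sequential(nn.Linear(point_dim, image_dim), nn.Sigmoid())
        self.gate_from_image = nn.Sequential(nn.Linear(image_dim, point_dim), nn.Sigmoid())
        self.fuse = nn.Linear(point_dim + image_dim, fused_dim)

    def forward(self, point_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # point_feats: (N, point_dim) per-point features
        # image_feats: (N, image_dim) image features sampled at projected point locations
        gated_image = image_feats * self.gate_from_points(point_feats)
        gated_points = point_feats * self.gate_from_image(image_feats)
        return self.fuse(torch.cat([gated_points, gated_image], dim=-1))


# Example: 4096 points with 64-d point features and 128-d sampled image features.
if __name__ == "__main__":
    block = MutualFeatureGating(point_dim=64, image_dim=128, fused_dim=128)
    fused = block(torch.randn(4096, 64), torch.randn(4096, 128))
    print(fused.shape)  # torch.Size([4096, 128])
```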
Details
- Language :
- English
- ISSN :
- 0921-0296
- Volume :
- 110
- Issue :
- 2
- Database :
- Complementary Index
- Journal :
- Journal of Intelligent & Robotic Systems
- Publication Type :
- Academic Journal
- Accession number :
- 177767125
- Full Text :
- https://doi.org/10.1007/s10846-024-02119-x