Efficient Online Transfer Learning for Road Participants Detection in Autonomous Driving
- Source :
- IEEE Sensors Journal; October 2023, Vol. 23, Issue 19, pp. 23522–23535 (14 pages)
- Publication Year :
- 2023
-
Abstract
- The spatial information provided by 3-D object detection is critical for autonomous driving. Although visual object detection has made significant progress in 2-D/3-D in recent years, nonvisual sensors such as 3-D light detection and ranging (LiDAR) retain inherent advantages in the accuracy of object localization. However, the sparse point clouds LiDAR generates remain difficult to interpret and costly to annotate manually. In this article, we propose an online transfer learning framework based on multimodal sensor systems for 3-D object detection of urban road participants, including pedestrians, cyclists, and cars. The framework automatically and efficiently transfers object detection capabilities from a 2-D monocular camera to a 3-D LiDAR through a multitarget tracker-based pipeline, enabling knowledge transfer between sensors of different modalities. Furthermore, online random forest (ORF), an inherently fast multiclass learning method, is integrated into our system. Experiments on two very different datasets, KITTI and Waymo, demonstrate that the proposed framework not only rapidly builds 3-D detection capabilities for road participants within a single dataset but also maintains this capability across datasets. This shows that the proposed framework is particularly suitable for in situ deployment on unmanned vehicles and can also address the insufficient generalization ability of current offline-trained detectors.
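The camera-to-LiDAR knowledge transfer described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: it assumes 3-D point-cloud clusters have already been projected into the image plane as 2-D boxes, matches them to camera detections by intersection-over-union (IoU), and lets matched clusters inherit the camera's class label, yielding auto-labeled samples for an online classifier without manual annotation. The function names `iou` and `transfer_labels` and the IoU threshold are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def transfer_labels(projected_clusters, camera_detections, iou_thresh=0.5):
    """Assign each projected LiDAR cluster the label of its best-matching
    2-D camera detection; unmatched clusters stay unlabeled (None)."""
    labels = []
    for box in projected_clusters:
        best = max(camera_detections,
                   key=lambda d: iou(box, d["box"]), default=None)
        if best is not None and iou(box, best["box"]) >= iou_thresh:
            labels.append(best["label"])  # camera knowledge transferred to LiDAR
        else:
            labels.append(None)           # no confident 2-D match
    return labels
```

In the full framework, a multitarget tracker would additionally enforce temporal consistency of these matches before the auto-labeled samples update the online random forest.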
Details
- Language :
- English
- ISSN :
- 1530-437X and 1558-1748
- Volume :
- 23
- Issue :
- 19
- Database :
- Supplemental Index
- Journal :
- IEEE Sensors Journal
- Publication Type :
- Periodical
- Accession number :
- ejs64147398
- Full Text :
- https://doi.org/10.1109/JSEN.2023.3305592