A benchmark approach and dataset for large-scale lane mapping from MLS point clouds
- Author
Xiaoxin Mi, Zhen Dong, Zhipeng Cao, Bisheng Yang, Zhen Cao, Chao Zheng, Jantien Stoter, and Liangliang Nan
- Subjects
Large-scale lane mapping, Point clouds, End-to-end, Neural network, Hierarchical attention, Physical geography (GB3-5030), Environmental sciences (GE1-350)
- Abstract
Accurate lane maps with semantics are crucial for various applications, such as high-definition maps (HD maps), intelligent transportation systems (ITS), and digital twins. Manual annotation of lanes is labor-intensive and costly, prompting researchers to explore automatic lane extraction methods. This paper presents an end-to-end large-scale lane mapping method that considers both lane geometry and semantics. Lane markings are represented as polylines with uniformly sampled points and associated semantics, allowing the representation to adapt to varying lane shapes. Additionally, we propose an end-to-end network to extract lane polylines from mobile laser scanning (MLS) data, enabling the inference of vectorized lane instances without complex post-processing. The network consists of three components: a feature encoder, a column proposal generator, and a lane information decoder. The feature encoder captures the textural and structural information of lane markings to improve robustness to data imperfections such as varying lane intensity, uneven point density, and occlusion-induced incomplete data. The column proposal generator produces regions of interest for the subsequent decoder. Leveraging the embedded multi-scale features from the feature encoder, the lane decoder effectively predicts lane polylines and their associated semantics without requiring step-by-step conditional inference. Comprehensive experiments on three lane datasets demonstrate the performance of the proposed method, even in the presence of incomplete data and complex lane topology. Furthermore, the datasets used in this work, including source ground points, generated bird's eye view (BEV) images, and annotations, as well as the code, will be publicly released with the publication of the paper.
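The polyline-with-semantics representation described in the abstract can be illustrated with a small, self-contained sketch. This is not the authors' implementation: the class name, the label set, and the choice of 20 sample points are assumptions made purely for illustration.

```python
# Hypothetical sketch of a vectorized lane instance: a polyline of N uniformly
# sampled BEV points plus a semantic label. Names and N are illustrative only.
from dataclasses import dataclass
import numpy as np


@dataclass
class LaneInstance:
    points: np.ndarray   # (N, 2) uniformly sampled BEV coordinates
    semantic: int        # e.g. 0 = solid, 1 = dashed (label set assumed)


def resample_polyline(vertices: np.ndarray, num_points: int = 20) -> np.ndarray:
    """Resample an arbitrary polyline to `num_points` points spaced uniformly
    by arc length, so lanes of any shape share one fixed-size form."""
    seg_len = np.linalg.norm(np.diff(vertices, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])       # arc length at each vertex
    targets = np.linspace(0.0, cum[-1], num_points)          # uniform arc-length samples
    resampled = np.empty((num_points, vertices.shape[1]))
    for d in range(vertices.shape[1]):
        resampled[:, d] = np.interp(targets, cum, vertices[:, d])
    return resampled


# Usage: a curved lane given by a few raw vertices becomes a fixed-size instance.
raw = np.array([[0.0, 0.0], [5.0, 0.5], [10.0, 2.0], [15.0, 4.5]])
lane = LaneInstance(points=resample_polyline(raw, num_points=20), semantic=1)
```

Resampling by arc length gives every lane instance the same fixed-size shape, which is what allows a network to regress vectorized lane instances directly without step-by-step conditional inference or complex post-processing.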
- Published
2024