Dynamically enhanced lane detection with multi-scale semantic feature fusion.
- Author
- Deng, Liwei, Cao, He, and Lan, Qi
- Subjects
- EXTREME weather, SPACE (Architecture), FEATURE extraction, AUTONOMOUS vehicles, WEATHER
- Abstract
• Integration of the ConvNeXt V2 backbone enables extraction of critical multi-scale features from the frontal view.
• The Enhanced BEV Features module facilitates feature fusion, yielding richer and more semantically informative representations.
• Dynamicallyformer introduces a novel architecture for feature-space transformation, employing dynamic learnable encoding and deformable attention mechanisms to achieve precise conversion from the frontal view to Bird's Eye View (BEV) via a novel representation of 3D reference points.

Lane detection plays a crucial role in autonomous driving technology, and 3D lane detection methods have advanced considerably in recent years. One common approach simplifies the problem by transforming images into Bird's Eye View (BEV) space. However, existing methods still have limitations, particularly in accurately identifying lanes in complex autonomous driving scenarios such as slopes and extreme weather. This paper therefore proposes a dynamically updated lane detection method called DynamicallyLane. The method uses a modified ConvNeXt V2-N as the backbone for feature extraction; employs dynamic learnable encoding and deformable attention mechanisms to achieve precise conversion from the frontal view to BEV through a novel representation of 3D reference points; and applies the Enhanced BEV Features module for feature fusion, obtaining richer and more semantically informative feature representations that better support model learning. Experimental results demonstrate that DynamicallyLane achieves an F-score of 56.2% on the OpenLane dataset and performs excellently on the Apollo 3D Synthetic dataset. Our code is available at https://github.com/Tafble/lane_detection. [ABSTRACT FROM AUTHOR]
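The abstract's core technical idea is the frontal-view-to-BEV conversion driven by learnable 3D reference points and deformable attention. The PyTorch sketch below illustrates that mechanism only under stated assumptions; the class name `BEVQuerySampler`, the BEV grid ranges, the offset scale, and all dimensions are hypothetical, and this is not the authors' implementation (which is linked above).

```python
# Illustrative sketch: frontal view -> BEV via learnable 3D reference points
# and deformable-attention-style sampling. All names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BEVQuerySampler(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, dim=256, bev_h=50, bev_w=32, n_points=4):
        super().__init__()
        self.bev_h, self.bev_w, self.n_points = bev_h, bev_w, n_points
        # One learnable query per BEV cell ("dynamic learnable encoding").
        self.queries = nn.Parameter(torch.randn(bev_h * bev_w, dim))
        # Each query predicts a height for its 3D reference point, plus
        # 2D sampling offsets and attention weights (deformable attention).
        self.height_head = nn.Linear(dim, 1)
        self.offset_head = nn.Linear(dim, n_points * 2)
        self.weight_head = nn.Linear(dim, n_points)

    def forward(self, fv_feat, intrinsics, extrinsics):
        """fv_feat: (B, C, H, W) frontal-view features; intrinsics (3, 3),
        assumed expressed at feature-map resolution, and extrinsics (4, 4)
        project the 3D reference points into the image plane."""
        B, C, H, W = fv_feat.shape
        q = self.queries                            # (N, C), N = bev_h * bev_w
        # Ground-plane (x, y) grid per BEV cell; z is learned per query,
        # so reference points can follow slopes instead of a flat plane.
        ys, xs = torch.meshgrid(
            torch.linspace(3.0, 103.0, self.bev_h, device=q.device),   # assumed
            torch.linspace(-10.0, 10.0, self.bev_w, device=q.device),  # ranges
            indexing="ij")
        z = self.height_head(q)                     # (N, 1) learned height
        pts3d = torch.cat(
            [torch.stack([xs.flatten(), ys.flatten()], -1), z], -1)  # (N, 3)
        # Pinhole projection of the 3D reference points (sketch only).
        ones = torch.ones(pts3d.shape[0], 1, device=q.device)
        cam = (extrinsics @ torch.cat([pts3d, ones], -1).T).T[:, :3]
        uv = (intrinsics @ cam.T).T
        uv = uv[:, :2] / uv[:, 2:].clamp(min=1e-5)  # (N, 2) pixel coordinates
        base = uv / uv.new_tensor([W, H]) * 2 - 1   # [-1, 1] for grid_sample
        # Deformable sampling: learned offsets around each projected point.
        offsets = self.offset_head(q).view(-1, self.n_points, 2) * 0.1
        weights = self.weight_head(q).softmax(-1)   # (N, n_points)
        grid = (base.unsqueeze(1) + offsets).clamp(-1, 1)
        grid = grid.unsqueeze(0).expand(B, -1, -1, -1)               # (B, N, P, 2)
        sampled = F.grid_sample(fv_feat, grid, align_corners=False)  # (B, C, N, P)
        bev = (sampled * weights.view(1, 1, -1, self.n_points)).sum(-1)
        return bev.view(B, C, self.bev_h, self.bev_w)
```

Letting each query learn its own reference-point height, rather than fixing a flat ground plane, is one plausible reading of the "novel representation of 3D reference points" that the abstract credits for robustness on slopes.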
- Published
- 2024