LDTR: Transformer-based lane detection with anchor-chain representation.
- Source :
- Computational Visual Media; Aug 2024, Vol. 10, Issue 4, p753-769, 17p
- Publication Year :
- 2024
Abstract
- Despite recent advances in lane detection methods, scenarios with limited or no visual clues of lanes due to factors such as lighting conditions and occlusion remain challenging and crucial for automated driving. Moreover, current lane representations require complex post-processing and struggle with specific instances. Inspired by the DETR architecture, we propose LDTR, a transformer-based model to address these issues. Lanes are modeled with a novel anchor-chain, treating a lane as a whole from the outset, which enables LDTR to handle special lanes inherently. To enhance lane instance perception, LDTR incorporates a novel multi-referenced deformable attention module to distribute attention around the object. Additionally, LDTR incorporates two line IoU algorithms to improve convergence efficiency and employs a Gaussian heatmap auxiliary branch to enhance model representation capability during training. To evaluate lane detection models, we rely on Fréchet distance, parameterized F1-score, and additional synthetic metrics. Experimental results demonstrate that LDTR achieves state-of-the-art performance on well-known datasets. [ABSTRACT FROM AUTHOR]
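- The abstract cites Fréchet distance as an evaluation metric for comparing predicted and ground-truth lane curves. As a point of reference only, below is a minimal sketch of the standard discrete Fréchet distance between two polylines; the function name, point format, and sampling are illustrative assumptions, not the paper's exact evaluation code.

```python
# Minimal sketch (not from the paper): discrete Fréchet distance between two
# polylines, each given as an ordered list of (x, y) points. Assumes lanes are
# sampled as ordered point sequences; names and data layout are illustrative.
from math import dist
from functools import lru_cache

def discrete_frechet(p, q):
    """Discrete Fréchet distance between polylines p and q."""
    @lru_cache(maxsize=None)
    def c(i, j):
        # Coupling distance up to points p[i] and q[j].
        d = dist(p[i], q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(p) - 1, len(q) - 1)

# Example: a predicted lane offset by 0.5 from the ground truth.
lane_pred = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
lane_gt = [(0.0, 0.5), (1.0, 2.5), (2.0, 4.5)]
print(discrete_frechet(lane_pred, lane_gt))  # -> 0.5
```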
- Subjects :
- TRANSFORMER models
- INCORPORATION
- ALGORITHMS
- ATTENTION
Details
- Language :
- English
- ISSN :
- 2096-0433
- Volume :
- 10
- Issue :
- 4
- Database :
- Complementary Index
- Journal :
- Computational Visual Media
- Publication Type :
- Academic Journal
- Accession number :
- 179669185
- Full Text :
- https://doi.org/10.1007/s41095-024-0421-5