
LDNet: End-to-End Lane Marking Detection Approach Using a Dynamic Vision Sensor.

Authors :
Munir, Farzeen
Azam, Shoaib
Jeon, Moongu
Lee, Byung-Geun
Pedrycz, Witold
Source :
IEEE Transactions on Intelligent Transportation Systems; Jul 2022, Vol. 23 Issue 7, p9318-9334, 17p
Publication Year :
2022

Abstract

Modern vehicles are equipped with various driver-assistance systems, including automatic lane keeping, which prevents unintended lane departures. Traditional lane detection methods incorporate handcrafted or deep learning-based features followed by postprocessing techniques for lane extraction using frame-based RGB cameras. The use of frame-based RGB cameras for lane detection is prone to illumination variations, sun glare, and motion blur, which limits the performance of lane detection methods. Incorporating an event camera for lane detection into the perception stack of autonomous driving is one of the most promising solutions for mitigating the challenges encountered by frame-based RGB cameras. The main contribution of this work is the design of a lane marking detection model that employs a dynamic vision sensor. This paper explores the novel application of lane marking detection using an event camera by designing a convolutional encoder followed by an attention-guided decoder. The spatial resolution of the encoded features is retained by a dense atrous spatial pyramid pooling (ASPP) block. The additive attention mechanism in the decoder improves performance for high-dimensional input encoded features, promoting lane localization and reducing postprocessing computation. The efficacy of the proposed work is evaluated using the DVS dataset for lane extraction (DET). The experimental results show significant improvements of 5.54% and 5.03% in F1 scores for the multiclass and binary-class lane marking detection tasks, respectively. Additionally, the intersection over union (IoU) scores of the proposed method surpass those of the best-performing state-of-the-art method by 6.50% and 9.37% in the multiclass and binary-class tasks, respectively. [ABSTRACT FROM AUTHOR]
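
The abstract describes an attention-guided decoder in which an additive attention mechanism re-weights encoder features before they are fused in the decoder. The sketch below is a minimal, hypothetical PyTorch illustration of such an additive attention gate (in the spirit of Attention U-Net style gating); the class name, channel sizes, and exact formulation are assumptions for illustration and are not taken from the paper.

# Minimal sketch (not the authors' code) of an additive attention gate of the
# kind used in attention-guided decoders. All sizes and names are illustrative.
import torch
import torch.nn as nn


class AdditiveAttentionGate(nn.Module):
    """Re-weights encoder skip features using a decoder gating signal."""

    def __init__(self, enc_channels: int, dec_channels: int, inter_channels: int):
        super().__init__()
        # Project both feature maps to a common intermediate dimension.
        self.w_enc = nn.Conv2d(enc_channels, inter_channels, kernel_size=1)
        self.w_dec = nn.Conv2d(dec_channels, inter_channels, kernel_size=1)
        # Collapse to a single-channel attention map in [0, 1].
        self.psi = nn.Sequential(
            nn.Conv2d(inter_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # Additive attention: sum the projections, squash to an attention map,
        # and use it to suppress irrelevant spatial locations in the skip path.
        attn = self.psi(self.relu(self.w_enc(enc_feat) + self.w_dec(dec_feat)))
        return enc_feat * attn


if __name__ == "__main__":
    # Toy usage: 64-channel encoder features gated by 128-channel decoder features
    # (decoder features assumed already upsampled to the same spatial size).
    enc = torch.randn(1, 64, 32, 64)
    dec = torch.randn(1, 128, 32, 64)
    gate = AdditiveAttentionGate(enc_channels=64, dec_channels=128, inter_channels=32)
    print(gate(enc, dec).shape)  # torch.Size([1, 64, 32, 64])

In this kind of design, the gated skip features help localize thin lane markings, which is consistent with the abstract's claim that the attention mechanism promotes lane localization and reduces postprocessing.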

Details

Language :
English
ISSN :
1524-9050
Volume :
23
Issue :
7
Database :
Complementary Index
Journal :
IEEE Transactions on Intelligent Transportation Systems
Publication Type :
Academic Journal
Accession number :
157955874
Full Text :
https://doi.org/10.1109/TITS.2021.3102479