1,582 results for "lane detection"
Search Results
2. IMO-Net: Integrated Memory Optimization Network for Video Instance Lane Detection
- Author
-
Liu, Boyong, Yin, Yunfei, Lin, Zhouchen, editor, Cheng, Ming-Ming, editor, He, Ran, editor, Ubul, Kurban, editor, Silamu, Wushouer, editor, Zha, Hongbin, editor, Zhou, Jie, editor, and Liu, Cheng-Lin, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Evaluating the Performance of YOLOP for Lane Detection with Challenging Road Conditions
- Author
-
Shehada, Dina, Bouridane, Ahmed, Yang, Xin-She, editor, Sherratt, R. Simon, editor, Dey, Nilanjan, editor, and Joshi, Amit, editor
- Published
- 2025
- Full Text
- View/download PDF
4. Refinecurvelane: lane detection with B-spline curve in a layer-by-layer refinement manner.
- Author
-
Tian, Wei, Han, Yi, Huang, Yuyao, and Yu, Xianwang
- Abstract
Lane detection with front-view RGB images has been a long-standing challenge. Among the various methods, curve-based approaches are known for their fast speed, conciseness, and ability to handle occlusions. However, these methods often suffer from relatively low accuracy, attributable to the inflexibility of the adopted curve model, inefficient lane feature extraction, and rigid curve regression supervision. In this paper, we propose a novel curve-based lane detection method that addresses these limitations. The lane lines are modeled with B-splines, which provide greater flexibility. Explicit spatial attention maps are used to guide the network in extracting relevant lane features from the image. Additionally, a layer-by-layer refinement process is employed to improve the lane predictions. Importantly, the ground truth of the spatial attention maps also serves as pixel-level supervision for the lane instances. We evaluate the proposed method on four widely used lane detection datasets and demonstrate state-of-the-art performance among curve-based approaches on the CULane and LLAMAS datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
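The B-spline lane model described in this abstract can be sketched with a clamped cubic spline over a handful of control points. The control points and knot layout below are illustrative assumptions, not the paper's actual parameterization:

```python
import numpy as np
from scipy.interpolate import BSpline

# Hypothetical control points (x, y) for one lane line, in image coordinates.
ctrl = np.array([[640.0, 720.0], [610.0, 560.0], [600.0, 400.0],
                 [620.0, 240.0], [660.0, 100.0]])
k = 3                                   # cubic B-spline
n = len(ctrl)
# Clamped knot vector: repeating the end knots makes the curve start and end
# exactly at the first and last control points.
t = np.concatenate([np.zeros(k), np.linspace(0.0, 1.0, n - k + 1), np.ones(k)])
lane = BSpline(t, ctrl, k)

u = np.linspace(0.0, 1.0, 50)
pts = lane(u)                           # (50, 2) points sampled along the lane
```

Sampling the spline at dense parameter values yields the lane polyline; moving a single control point bends only a local stretch of the curve, which is the flexibility the abstract credits to B-splines over fixed polynomial models.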
5. Lane extraction from trajectories at road intersections based on Graph Transformer Network.
- Author
-
Wan, Chongshan, Yue, Peng, Yang, Can, Cai, Chuanwei, and Liu, Xiaoxue
- Subjects
-
INTELLIGENT transportation systems, TRANSFORMER models, FEATURE extraction, MOTOR vehicle driving
- Abstract
Lane-level road networks are crucial components of high-precision maps and play a significant role in intelligent transportation systems. Extracting lane-level road networks at intersections presents considerable challenges due to the complex structures of intersections and diverse driving behaviors. A graph-learning-based method is proposed for extracting lanes from high-precision trajectories at road intersections. A trajectory relation graph is designed to encode the directional, shape, and distance features of trajectories, capturing both the intrinsic and extrinsic relationships between trajectories. Subsequently, a Graph Transformer Network is developed to extract a representative subset of trajectories as lanes. To alleviate the problem of generating missing and extraneous lanes, a set-based lane extraction loss is introduced to achieve implicit pruning of redundancy through the attention mechanism. Comprehensive experimental results demonstrate that the proposed method outperforms state-of-the-art methods in three positional and topological accuracy metrics. The method achieves lane extraction with minimal omissions and redundancies and exhibits strong performance in complex scenarios such as U-turns, lane merging, and lane diverging regions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Autonomous Multitask Driving Systems Using Improved You Only Look Once Based on Panoptic Driving Perception.
- Author
-
Chun-Jung Lin, Cheng-Jian Lin, and Yi-Chen Yang
- Subjects
TAGUCHI methods, DATABASES, AUTONOMOUS vehicles
- Abstract
With the continuous development of science and technology, automatic assisted driving is becoming a trend that cannot be ignored. The You Only Look Once (YOLO) model is usually used to detect roads and drivable areas. Since YOLO is often used for a single task and a suitable parameter combination is difficult to obtain, we propose a Taguchi-based YOLO for panoptic driving perception (T-YOLOP) model to improve the accuracy and computing speed of the model in detecting drivable areas and lanes, making it a more practical panoptic driving perception system. In the T-YOLOP model, the Taguchi method is used to determine the appropriate parameter combination. Our experiments use the BDD100K database to verify the performance of the proposed T-YOLOP model. Experimental results show that the accuracies of the proposed T-YOLOP model in detecting drivable areas and lanes are 97.9% and 73.9%, respectively, and these results are better than those of the traditional YOLOP model. Therefore, the proposed T-YOLOP model successfully provides a more reliable solution for the application of panoptic driving perception systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
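The Taguchi method mentioned in this abstract selects a parameter combination by ranking candidates with a signal-to-noise ratio. A minimal larger-is-better sketch, using made-up accuracy measurements rather than the paper's data:

```python
import numpy as np

# Hypothetical accuracy measurements (%) for three candidate parameter
# combinations, each evaluated over three trials.
trials = {
    "combo_A": [96.8, 97.1, 96.9],
    "combo_B": [97.8, 97.9, 98.0],
    "combo_C": [95.2, 96.0, 95.5],
}

def sn_larger_is_better(y):
    # Taguchi larger-is-better S/N ratio: -10 * log10(mean(1 / y^2)).
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Pick the combination with the highest S/N ratio.
best = max(trials, key=lambda name: sn_larger_is_better(trials[name]))
```

In a full Taguchi design the candidate combinations would come from an orthogonal array rather than an exhaustive list; the ranking step is the same.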
7. LuminanceGAN: Controlling the brightness of generated images for various night conditions.
- Author
-
Seo, Junghyun, Wang, Sungjun, Jeon, Hyeonjae, Kim, Taesoo, Jin, Yongsik, Kwon, Soon, Kim, Jeseok, and Lim, Yongseob
- Subjects
-
IMAGE denoising, DATA augmentation, AUTONOMOUS vehicles
- Abstract
There are diverse datasets available for training deep learning models utilized in autonomous driving. However, most of these datasets are composed of images obtained in day conditions, leading to a data imbalance issue when dealing with night condition images. Several day-to-night image translation models have been proposed to resolve the insufficiency of night condition datasets, but these models often generate artifacts and cannot control the brightness of the generated image. In this study, we propose LuminanceGAN, which controls the brightness degree in night conditions to generate realistic night image outputs. The proposed novel Y-control loss converges the brightness degree of the output image to a specific luminance value. Furthermore, a self-attention module effectively reduces artifacts in the generated images. Consequently, in qualitative comparisons, our model demonstrates superior performance in day-to-night image translation. Additionally, a quantitative evaluation was conducted using lane detection models, showing that our proposed method improves performance in night lane detection tasks. Moreover, the quality of the generated indoor dark images was assessed using an evaluation metric, showing that our model generates images most similar to real dark images compared to other image translation models. • A novel Y-control loss is proposed to control the brightness degree of the generated night condition images. • Artifacts in the generated night condition images are reduced by incorporating a self-attention module. • The data imbalance issue is addressed with varied-brightness night images generated by the proposed model. • The model exhibits general applicability across various domains. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
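The idea of steering a generated image toward a target luminance can be illustrated with BT.601 luma and an L1 penalty. This is a hedged sketch; the paper's exact Y-control loss formulation is not reproduced here:

```python
import numpy as np

def mean_luminance(rgb):
    # ITU-R BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B, values in [0, 1].
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float(np.mean(0.299 * r + 0.587 * g + 0.114 * b))

def y_control_loss(generated_rgb, target_y):
    # Assumed form: L1 distance between the generated image's mean luma
    # and the requested brightness level.
    return abs(mean_luminance(generated_rgb) - target_y)

img = np.full((4, 4, 3), 0.5)          # uniform mid-gray image, mean luma 0.5
loss = y_control_loss(img, target_y=0.2)
```

Minimizing such a term during training pushes the generator's output brightness toward the requested level, which is what lets one model cover "various night conditions" instead of a single fixed darkness.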
8. Effective lane detection on complex roads with convolutional attention mechanism in autonomous vehicles
- Author
-
Vinay Maddiralla and Sumathy Subramanian
- Subjects
Attention mechanism, Object detection, Sharpening filter, Autonomous vehicles, Convolutional neural network, Lane detection, Medicine, Science
- Abstract
Abstract Autonomous Vehicles (AVs) have achieved great popularity in vehicular technology in recent years. For the development of secure and safe driving, AVs help to reduce uncertainties such as crashes, heavy traffic, pedestrian behaviours, random objects, different types of roads, and their surrounding environments. In AVs, lane detection is one of the most important aspects, as it supports lane holding guidance and lane departure warning. The literature shows that existing deep learning models perform better on well-maintained roads and in favourable weather conditions; however, performance in extreme weather conditions and on curvy roads needs attention. The proposed work presents an accurate lane detection approach for poor roads, particularly those with curves, broken lanes, or no lane markings, and for extreme weather conditions. A Lane Detection with Convolutional Attention Mechanism (LD-CAM) model is proposed to achieve this outcome. The proposed method comprises an encoder, an enhanced convolutional block attention module (E-CBAM), and a decoder. The encoder unit extracts the input image features, the E-CBAM focuses on the quality of the feature maps extracted from the encoder, and the decoder provides output without loss of any information in the original image. The work is carried out using distinct data from three datasets: TuSimple for different weather condition images, CurveLanes for different curve lane images, and Cracks and Potholes for damaged road images. The proposed model trained using these datasets showcased improved performance, attaining an Accuracy of 97.90%, Precision of 98.92%, F1-Score of 97.90%, IoU of 98.50%, and Dice Coefficient of 98.80% on both structured and defective roads in extreme weather conditions.
- Published
- 2024
- Full Text
- View/download PDF
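The channel-attention half of a CBAM-style block, which the E-CBAM above builds on, can be sketched in plain NumPy. Weights and features below are random placeholders, and the paper's enhanced variant is not reproduced here:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    # CBAM-style channel attention: a shared two-layer MLP is applied to the
    # average-pooled and max-pooled channel descriptors, and their sum is
    # passed through a sigmoid to gate each channel. feat has shape (C, H, W).
    avg = feat.mean(axis=(1, 2))
    mx = feat.max(axis=(1, 2))
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)      # ReLU hidden layer
    gate = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # per-channel weight in (0, 1)
    return feat * gate[:, None, None]

C, H, W = 4, 8, 8
rng = np.random.default_rng(1)
feat = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(C // 2, C))    # squeeze to C/2 hidden units
w2 = rng.normal(size=(C, C // 2))    # expand back to C channels
out = channel_attention(feat, w1, w2)
```

The gate rescales informative channels up and uninformative ones down; a full CBAM block would follow this with a spatial-attention map over (H, W).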
9. Lane detection networks based on deep neural networks and temporal information
- Author
-
Huei-Yung Lin, Chun-Ke Chang, and Van Luan Tran
- Subjects
Advanced driver assistance system, Lane detection, Convolutional neural network, Engineering (General). Civil engineering (General), TA1-2040
- Abstract
In the past few years, the lane detection technique has become a key factor for autonomous driving systems and self-driving cars on the road. Among the various vehicle subsystems, the lane detection module is one of the essential parts of the Advanced Driver Assistance System (ADAS). Conventional lane detection approaches use machine vision algorithms to find straight lines in road scene images. However, it is challenging to identify straight or curved lane markings in complex environments. To deal with this problem, this paper presents a lane detection technique based on deep learning. It is combined with a 3D convolutional network, so temporal information is added to the network architecture. Using the front camera images, the system can immediately detect the lane marking information ahead. Moreover, we propose an approach for improvement by adding the time axis to the network architecture. In addition to using 3D-ResNet50, the temporal convolution and spatial convolution are separated for processing. The accuracy improves to 91.34% after adding the time and space split convolution with LeakyReLU. The experiments carried out using real scene images have demonstrated the feasibility of the proposed technique for applications to various complex scenes.
- Published
- 2024
- Full Text
- View/download PDF
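The parameter saving from separating temporal and spatial convolutions, as described in this abstract, can be checked with simple arithmetic. The kernel shapes below are illustrative, not the paper's exact configuration:

```python
# One full 3D kernel of size T x Kh x Kw versus the factorized (2+1)D pair:
# a 1 x Kh x Kw spatial convolution followed by a T x 1 x 1 temporal one.
t, kh, kw, cin, cout = 3, 3, 3, 64, 64

full_3d = t * kh * kw * cin * cout      # parameters of the joint 3D conv
spatial_2d = kh * kw * cin * cout       # spatial-only convolution
temporal_1d = t * cout * cout           # temporal-only convolution
factorized = spatial_2d + temporal_1d   # separated pair, fewer parameters
```

With these shapes the factorized pair needs 49,152 weights against 110,592 for the joint kernel, while also inserting an extra nonlinearity between the two stages.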
10. Contrastive Learning for Lane Detection via cross-similarity.
- Author
-
Zoljodi, Ali, Abadijou, Sadegh, Alibeigi, Mina, and Daneshtalab, Masoud
- Subjects
-
CONVOLUTIONAL neural networks
- Published
- 2024
- Full Text
- View/download PDF
11. Lane Attribute Classification Based on Fine-Grained Description.
- Author
-
He, Zhonghe, Gong, Pengfei, Ye, Hongcheng, and Gan, Zizheng
- Subjects
-
TRAFFIC monitoring, ROAD markings, PROBLEM solving, ANNOTATIONS, ALGORITHMS, INTELLIGENT transportation systems
- Abstract
As an indispensable part of the vehicle environment perception task, road traffic marking detection plays a vital role in correctly understanding the current traffic situation. However, existing traffic marking detection algorithms still have some limitations. Taking lane detection as an example, current detection methods mainly focus on the location of lane lines and judge only the overall attribute of each detected lane line instance, thus lacking more fine-grained dynamic detection of lane line attributes. To meet the needs of intelligent vehicles for dynamic attribute detection of lane lines and more complete road environment information in urban road environments, this paper constructs a fine-grained attribute detection method for lane lines, which uses pixel-level attribute sequence points to describe the complete attribute distribution of lane lines and then matches them with the lane line detection results, realizing attribute judgment at different segment positions of lane instances; we call this fine-grained attribute detection of lane lines (Lane-FGA). In addition, in view of the lack of annotation information in current open-source lane datasets, this paper constructs a lane dataset with both lane instance information and fine-grained attribute information by combining manual and intelligent annotation. At the same time, a cyclic iterative attribute inference algorithm is designed to solve the difficult problem of labeling lane attributes in areas without visual cues, such as occluded and damaged regions. In the end, the average accuracy of the proposed algorithm reaches 97% on various types of lane attribute detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. LDTR: Transformer-based lane detection with anchor-chain representation.
- Author
-
Yang, Zhongyu, Shen, Chen, Shao, Wei, Xing, Tengfei, Hu, Runbo, Xu, Pengfei, Chai, Hua, and Xue, Ruini
- Subjects
TRANSFORMER models, INCORPORATION, ALGORITHMS, ATTENTION
- Abstract
Despite recent advances in lane detection methods, scenarios with limited or no visual clues of lanes due to factors such as lighting conditions and occlusion remain challenging and crucial for automated driving. Moreover, current lane representations require complex post-processing and struggle with specific instances. Inspired by the DETR architecture, we propose LDTR, a transformer-based model to address these issues. Lanes are modeled with a novel anchor-chain, regarding a lane as a whole from the beginning, which enables LDTR to handle special lanes inherently. To enhance lane instance perception, LDTR incorporates a novel multi-referenced deformable attention module to distribute attention around the object. Additionally, LDTR incorporates two line IoU algorithms to improve convergence efficiency and employs a Gaussian heatmap auxiliary branch to enhance model representation capability during training. To evaluate lane detection models, we rely on the Fréchet distance, the parameterized F1-score, and additional synthetic metrics. Experimental results demonstrate that LDTR achieves state-of-the-art performance on well-known datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
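The Fréchet distance used above to compare predicted and ground-truth lanes can be computed for discrete polylines with the standard dynamic-programming recurrence. This is a sketch of the discrete variant; the paper may use a parameterized form:

```python
import numpy as np
from functools import lru_cache

def discrete_frechet(p, q):
    # Discrete Fréchet distance between two polylines given as point lists:
    # the smallest "leash length" needed to walk both curves monotonically.
    p, q = np.asarray(p, float), np.asarray(q, float)

    @lru_cache(maxsize=None)
    def c(i, j):
        d = np.linalg.norm(p[i] - q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(p) - 1, len(q) - 1)

# Two parallel lane polylines offset by 0.5 in x: distance is exactly 0.5.
a = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)]
b = [(0.5, 0.0), (0.5, 1.0), (0.5, 2.0)]
```

Unlike a pointwise average error, this metric penalizes the worst deviation along the best possible alignment of the two curves, which suits curve-shaped objects like lanes.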
13. GDMNet: A Unified Multi-Task Network for Panoptic Driving Perception.
- Author
-
Liu, Yunxiang, Ma, Haili, Zhu, Jianlin, and Zhang, Qiangbo
- Subjects
OBJECT recognition (Computer vision), FEATURE extraction, GEOGRAPHICAL perception, ALGORITHMS
- Abstract
To enhance the efficiency and accuracy of environmental perception for autonomous vehicles, we propose GDMNet, a unified multi-task perception network for autonomous driving, capable of performing drivable area segmentation, lane detection, and traffic object detection. Firstly, in the encoding stage, features are extracted, and Generalized Efficient Layer Aggregation Network (GELAN) is utilized to enhance feature extraction and gradient flow. Secondly, in the decoding stage, specialized detection heads are designed; the drivable area segmentation head employs DySample to expand feature maps, the lane detection head merges early-stage features and processes the output through the Focal Modulation Network (FMN). Lastly, the Minimum Point Distance IoU (MPDIoU) loss function is employed to compute the matching degree between traffic object detection boxes and predicted boxes, facilitating model training adjustments. Experimental results on the BDD100K dataset demonstrate that the proposed network achieves a drivable area segmentation mean intersection over union (mIoU) of 92.2%, lane detection accuracy and intersection over union (IoU) of 75.3% and 26.4%, respectively, and traffic object detection recall and mAP of 89.7% and 78.2%, respectively. The detection performance surpasses that of other single-task or multi-task algorithm models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
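The MPDIoU loss mentioned in this abstract augments IoU with penalties on the corner-point distances between the two boxes. The sketch below follows the commonly cited formulation, normalizing by the squared image diagonal; treat the exact form as an assumption rather than this paper's definition:

```python
def mpdiou(box_a, box_b, img_w, img_h):
    # Boxes as (x1, y1, x2, y2). MPDIoU = IoU - d1^2/(w^2 + h^2) - d2^2/(w^2 + h^2),
    # where d1, d2 are the distances between top-left and bottom-right corners.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0
    norm = img_w**2 + img_h**2                       # squared image diagonal
    d1 = (ax1 - bx1) ** 2 + (ay1 - by1) ** 2         # top-left corner gap
    d2 = (ax2 - bx2) ** 2 + (ay2 - by2) ** 2         # bottom-right corner gap
    return iou - d1 / norm - d2 / norm
```

Perfectly matched boxes score 1.0; any corner offset lowers the score even when plain IoU would be unchanged, which gives the regression loss a gradient for misaligned but overlapping boxes.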
14. Lane detection networks based on deep neural networks and temporal information.
- Author
-
Lin, Huei-Yung, Chang, Chun-Ke, and Tran, Van Luan
- Subjects
ARTIFICIAL neural networks, DRIVER assistance systems, DRIVERLESS cars, TIME-varying networks, INFORMATION networks, CONVOLUTIONAL neural networks
- Abstract
In the past few years, the lane detection technique has become a key factor for autonomous driving systems and self-driving cars on the road. Among the various vehicle subsystems, the lane detection module is one of the essential parts of the Advanced Driver Assistance System (ADAS). Conventional lane detection approaches use machine vision algorithms to find straight lines in road scene images. However, it is challenging to identify straight or curved lane markings in complex environments. To deal with this problem, this paper presents a lane detection technique based on deep learning. It is combined with a 3D convolutional network, so temporal information is added to the network architecture. Using the front camera images, the system can immediately detect the lane marking information ahead. Moreover, we propose an approach for improvement by adding the time axis to the network architecture. In addition to using 3D-ResNet50, the temporal convolution and spatial convolution are separated for processing. The accuracy improves to 91.34% after adding the time and space split convolution with LeakyReLU. The experiments carried out using real scene images have demonstrated the feasibility of the proposed technique for applications to various complex scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. A Faster and Lightweight Lane Detection Method in Complex Scenarios.
- Author
-
Nie, Shuaiqi, Zhang, Guiheng, Yun, Libo, and Liu, Shuxian
- Subjects
DRIVER assistance systems, FEATURE extraction
- Abstract
Lane detection is a crucial visual perception task in the field of autonomous driving, serving as one of the core modules in advanced driver assistance systems (ADASs). To address the insufficient real-time performance of current segmentation-based models and the conflict between the demand for high inference speed and excessive model parameters on resource-constrained edge devices (such as onboard hardware and mobile terminals) in complex real-world scenarios, this paper proposes an efficient and lightweight auxiliary branch network (CBGA-Auxiliary). Firstly, to enhance the model's capability to extract feature information in complex scenarios, a row-anchor-based feature extraction method based on global features is adopted. Secondly, employing ResNet as the backbone network and CBGA (Conv-Bn-GELU-SE Attention) as the fundamental module, we form the auxiliary segmentation network, significantly enhancing the segmentation training speed of the model. Additionally, we replace the standard convolutions in the branch network with lightweight GhostConv convolutions, which reduces the parameters and computational complexity while maintaining accuracy. Finally, an additional enhanced structural loss function is introduced to compensate for the structural defect loss inherent in the row-anchor-based method, further improving the detection accuracy. The model underwent extensive experimentation on the TuSimple and CULane datasets, which encompass various road scenarios. The experimental results indicate that the model achieved F1 scores of 96.1% and 71.0% on the TuSimple and CULane datasets, respectively. At a resolution of 288 × 800, the ResNet18 and ResNet34 variants achieved maximum inference speeds of 410 FPS and 280 FPS, respectively, a significant advantage over existing SOTA models. The model achieves a good balance between accuracy and inference speed, making it suitable for deployment on edge devices and validating its effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
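The row-anchor representation referenced in this abstract reduces lane detection to per-row classification over grid cells. A minimal decoding sketch, with illustrative grid size, image width, and hand-set logits in place of a trained network's output:

```python
import numpy as np

# Row-anchor decoding: each of R row anchors gets classifier scores over C
# grid cells plus one extra "no lane" class; the lane's x position on a row
# is the winning cell, mapped back to image coordinates.
R, C, img_w = 4, 10, 800
logits = np.zeros((R, C + 1))
logits[:, 3] = 1.0          # cell 3 wins on rows 0, 1, 3 (illustrative values)
logits[2, 3] = 0.0
logits[2, C] = 1.0          # row 2 predicts the "no lane" class

cells = logits.argmax(axis=1)                              # winning class per row
xs = np.where(cells < C, cells * (img_w / C), np.nan)      # NaN = no lane on that row
```

Because each row anchor needs only one argmax over C+1 classes instead of a dense per-pixel mask, this formulation is what makes the high frame rates quoted above attainable.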
16. Vision-Based Multi-Stages Lane Detection Algorithm.
- Author
-
Faizi, Fayez Saeed and Al-sulaifanie, Ahmed Khorsheed
- Subjects
CONVOLUTIONAL neural networks, DRIVERLESS cars, AUTONOMOUS vehicles, ALGORITHMS
- Abstract
Lane detection is an essential task for autonomous vehicles, and deep learning-based lane detection methods are leading development in this sector. This paper proposes an algorithm named Deep Learning-based Lane Detection (DLbLD), a Convolutional Neural Network (CNN)-based lane detection algorithm. The presented paradigm deploys a CNN to detect line features in each image block, predict a point on the lane line part, and project all the detected points of each frame into one-dimensional form before applying K-means clustering to assign points to the related lane lines. Extensive tests on different benchmarks were done to evaluate the performance of the proposed algorithm. The results demonstrate that the introduced DLbLD scheme achieves state-of-the-art performance, with F1 scores of 97.19 and 79.02 recorded for the TuSimple and CULane benchmarks, respectively, indicating the high accuracy of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
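The point-projection and clustering step described in this abstract can be sketched with a tiny 1D k-means. The points are synthetic and the projection is loosely approximated by taking each point's x coordinate, which is only an assumption about the paper's scheme:

```python
import numpy as np

# Detected lane points (x, y) from one frame: three points per lane line.
pts = np.array([[100, 10], [104, 50], [98, 90],
                [300, 12], [305, 55], [296, 88]], dtype=float)
proj = pts[:, 0]                        # 1D projection of each point

# Tiny k-means with k=2: initialize centers at the projection extremes,
# then alternate assignment and center updates.
centers = np.array([proj.min(), proj.max()])
for _ in range(10):
    labels = np.abs(proj[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([proj[labels == k].mean() for k in range(2)])
```

Each cluster label then identifies one lane line, and the original (x, y) points in a cluster can be fitted with a curve to produce that lane's geometry.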
17. Japanese Road Lane Line Recognition Based on TwinLiteNet
- Author
-
Yanqiao, Li, Haoran, Ji, Karungaru, Stephen, Terada, Kenji, and Ma, Yongsheng, editor
- Published
- 2024
- Full Text
- View/download PDF
18. FPGA-Based DNN Implementation for the Autonomous Car System
- Author
-
Lam, Duc Khai, Vy Vo, Dang Nhat, Anh Pham, Xuan Tuan, Thinh Ngo, Ha Quang, Hai, Nguyen Thanh, editor, Huy, Nguyen Xuan, editor, Amine, Khalil, editor, and Lam, Tran Dai, editor
- Published
- 2024
- Full Text
- View/download PDF
19. oneAPI-Based Design and Development of an Advanced Driver Assistance and Monitoring System Utilizing Embedded Machine Vision Technology
- Author
-
Murugesh, T. S., Vasudevan, Shriram K., Pulari, Sini Raj, Dantu, Nitin Vamsi, Hassanien, Aboul Ella, editor, Anand, Sameer, editor, Jaiswal, Ajay, editor, and Kumar, Prabhat, editor
- Published
- 2024
- Full Text
- View/download PDF
20. Car Assistance System with Drowsiness Detection, Lane Detection and Speed Monitoring
- Author
-
Kapoor, Anjali, Mishra, Anju, Jangra, Vivek, Singh, Ajeet, Santosh, K. C., editor, Sood, Sandeep Kumar, editor, Pandey, Hari Mohan, editor, and Virmani, Charu, editor
- Published
- 2024
- Full Text
- View/download PDF
21. Achieving High-Precision Localization in Self-driving Cars Using Real-Time Visual-Based Systems
- Author
-
Viet, Pham Tuan, Hung, Phan Duy, and Luo, Yuhua, editor
- Published
- 2024
- Full Text
- View/download PDF
22. An Intelligent Self-Driving Car’s Design and Development, Including Lane Detection Using ROS and Machine Vision Algorithms
- Author
-
Sujatha, E., Sundar, J. Sathiya Jeba, Raju, D. Naveen, Lakshminarayanan, S., Suganthi, N., Rathore, Vijay Singh, editor, Tavares, Joao Manuel R. S., editor, Surendiran, B., editor, and Yadav, Anil, editor
- Published
- 2024
- Full Text
- View/download PDF
23. Lurking in the Shadows: Imperceptible Shadow Black-Box Attacks Against Lane Detection Models
- Author
-
Cui, Xiaoshu, Wu, Yalun, Gu, Yanfeng, Li, Qiong, Tong, Endong, Liu, Jiqiang, Niu, Wenjia, Cao, Cungeng, editor, Chen, Huajun, editor, Zhao, Liang, editor, Arshad, Junaid, editor, Asyhari, Taufiq, editor, and Wang, Yonghao, editor
- Published
- 2024
- Full Text
- View/download PDF
24. Unveiling Superior Lane Detection Techniques Through the Synergistic Fusion of Attention-Based Vision Transformers and Dense Convolutional Neural Networks
- Author
-
Das, Subhranil, Kumari, Rashmi, Kumar, Ankit, Thakur, Abhishek, Singh, Raghwendra Kishore, Hassanien, Aboul Ella, editor, Anand, Sameer, editor, Jaiswal, Ajay, editor, and Kumar, Prabhat, editor
- Published
- 2024
- Full Text
- View/download PDF
25. Semi-Automated Vehicle Controlled Using Wi-Fi
- Author
-
Dhanush, D. K. V., Murari, Ch., Manjunath, K., Saketh, G., Owk, Mrudula, Lin, Frank M., editor, Patel, Ashokkumar, editor, Kesswani, Nishtha, editor, and Sambana, Bosubabu, editor
- Published
- 2024
- Full Text
- View/download PDF
26. Perspective Transform-Based Lane Detection for Lane Keep Assistance
- Author
-
Gireesha, H. M., Aarya, K. H., Sahana, B. S., Lalita, J. S., Abhishek, V. S., Nissimagoudar, P. C., Iyer, Nalini C., Joshi, Amit, editor, Mahmud, Mufti, editor, Ragel, Roshan G., editor, and Karthik, S., editor
- Published
- 2024
- Full Text
- View/download PDF
27. Fourier-Based Instance Selective Whitening for Domain Generalized Lane Detection
- Author
-
Xu, Weiyan, Wei, Shikui, Xu, Sen, Tan, Chuangchuang, Zhang, Shengli, Zhao, Yao, Hong, Wenxing, editor, and Kanaparan, Geetha, editor
- Published
- 2024
- Full Text
- View/download PDF
28. Incorporating Computer Vision and Machine Learning for Lane and Curve Detection in Vehicle Mobility
- Author
-
Nabou, Abdellah, Al-Tameemi, Atheer L. Salih, Abdelwahed, El Hassan, Jaouar, Maria, El-Ouardi, Yassine, Tandia, Hadiya, Ezziyyani, Mostafa, editor, and Balas, Valentina Emilia, editor
- Published
- 2024
- Full Text
- View/download PDF
29. A Review of a Research in Autonomous Vehicles with Embedded Systems
- Author
-
Akdeniz, Fulya, Atay, Mert, Vural, Şule, Savaş, Burcu Kır, Becerikli, Yaşar, Ben Ahmed, Mohamed, editor, Boudhir, Anouar Abdelhakim, editor, El Meouche, Rani, editor, and Karaș, İsmail Rakıp, editor
- Published
- 2024
- Full Text
- View/download PDF
30. LANet: A Single Stage Lane Detector with Lightweight Attention
- Author
-
Xie, Qiangbin, Zhao, Xiao, Zhang, Lihua, Fang, Lu, editor, Pei, Jian, editor, Zhai, Guangtao, editor, and Wang, Ruiping, editor
- Published
- 2024
- Full Text
- View/download PDF
31. Model Distillation for Lane Detection on Car-Level Chips
- Author
-
Wei, Zixiong, Wang, Zerun, Chen, Hui, Shao, Tianyu, Huang, Lihong, Kang, Xiaoyun, Tian, Xiang, Fang, Lu, editor, Pei, Jian, editor, Zhai, Guangtao, editor, and Wang, Ruiping, editor
- Published
- 2024
- Full Text
- View/download PDF
32. Addressing Vehicle Safety and Platooning Using Low-Cost Object Detection Algorithms
- Author
-
Sharma, Prathmesh, Gangwar, Priti, Gupta, Ritik, Mittal, Poornima, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Mehta, Gayatri, editor, Wickramasinghe, Nilmini, editor, and Kakkar, Deepti, editor
- Published
- 2024
- Full Text
- View/download PDF
33. Research and Application of Lane Intelligent Detection System Based on Internet of Things Technology
- Author
-
Xiong, ZhiWen, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Hung, Jason C., editor, Yen, Neil, editor, and Chang, Jia-Wei, editor
- Published
- 2024
- Full Text
- View/download PDF
34. Cross-Task Physical Adversarial Attack Against Lane Detection System Based on LED Illumination Modulation
- Author
-
Fang, Junbin, Yang, Zewei, Dai, Siyuan, Jiang, You, Jiang, Canjian, Jiang, Zoe L., Liu, Chuanyi, Yiu, Siu-Ming, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Liu, Qingshan, editor, Wang, Hanzi, editor, Ma, Zhanyu, editor, Zheng, Weishi, editor, Zha, Hongbin, editor, Chen, Xilin, editor, Wang, Liang, editor, and Ji, Rongrong, editor
- Published
- 2024
- Full Text
- View/download PDF
35. CCLane: Concise Curve Anchor-Based Lane Detection Model with MLP-Mixer
- Author
-
Yang, Fan, Zhao, Yanan, Gao, Li, Tan, Huachun, Liu, Weijin, Chen, Xue-mei, Yang, Shijuan, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Liu, Qingshan, editor, Wang, Hanzi, editor, Ma, Zhanyu, editor, Zheng, Weishi, editor, Zha, Hongbin, editor, Chen, Xilin, editor, Wang, Liang, editor, and Ji, Rongrong, editor
- Published
- 2024
- Full Text
- View/download PDF
36. Lane Detection in Autonomous Vehicles Using AI
- Author
-
Saranya, M., Archana, N., Janani, M., Keerthishree, R., Chlamtac, Imrich, Series Editor, Naganathan, Archana, editor, Jayarajan, Niresh, editor, and Bin Ibne Reaz, Mamun, editor
- Published
- 2024
- Full Text
- View/download PDF
37. CSA-Lanenet: a contiguous spatial attention lane detection network with vision transformer modules
- Author
-
Yang, Wei-Jong and Ho, Li-Yang
- Published
- 2024
- Full Text
- View/download PDF
38. A Robust Target Detection Algorithm Based on the Fusion of Frequency-Modulated Continuous Wave Radar and a Monocular Camera.
- Author
-
Yang, Yanqiu, Wang, Xianpeng, Wu, Xiaoqin, Lan, Xiang, Su, Ting, and Guo, Yuehao
- Subjects
- *CONTINUOUS wave radar, *FAST Fourier transforms, *MONOCULARS, *MULTISENSOR data fusion, *ALGORITHMS
- Abstract
Decision-level information fusion methods using radar and vision usually suffer from low target matching success rates and imprecise multi-target detection accuracy. Therefore, a robust target detection algorithm based on the fusion of frequency-modulated continuous wave (FMCW) radar and a monocular camera is proposed to address these issues in this paper. Firstly, a lane detection algorithm is used to process the image to obtain lane information. Then, two-dimensional fast Fourier transform (2D-FFT), constant false alarm rate (CFAR), and density-based spatial clustering of applications with noise (DBSCAN) are used to process the radar data. Furthermore, the YOLOv5 algorithm is used to process the image. In addition, the lane lines are utilized to filter out the interference targets from outside lanes. Finally, multi-sensor information fusion is performed for targets in the same lane. Experiments show that the balanced score of the proposed algorithm can reach 0.98, which indicates that it has low false and missed detections. Additionally, the balanced score is almost unchanged in different environments, proving that the algorithm is robust. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
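The radar chain named in the abstract above (2D-FFT, CFAR, DBSCAN) can be sketched in a few lines. The snippet below reduces it to one dimension: a range FFT over a synthetic beat signal followed by cell-averaging CFAR detection. All signal parameters are invented for the example and are not from the paper.

```python
import numpy as np

# One synthetic FMCW target: the beat frequency encodes its range.
fs = 1_000_000                 # ADC sample rate in Hz (assumed)
n_samples = 1024
t = np.arange(n_samples) / fs
beat_freq = 200_000            # beat frequency of the synthetic target (Hz)
rng = np.random.default_rng(0)
signal = np.cos(2 * np.pi * beat_freq * t) + 0.1 * rng.standard_normal(n_samples)

# Range FFT: the peak bin corresponds to the target's beat frequency.
spectrum = np.abs(np.fft.rfft(signal))

def ca_cfar(x, guard=4, train=16, scale=5.0):
    """Cell-averaging CFAR: flag cells exceeding scale * local noise mean."""
    hits = []
    for i in range(train + guard, len(x) - train - guard):
        leading = x[i - guard - train:i - guard]
        trailing = x[i + guard + 1:i + guard + 1 + train]
        noise = np.mean(np.concatenate([leading, trailing]))
        if x[i] > scale * noise:
            hits.append(i)
    return hits

detections = ca_cfar(spectrum)
expected_bin = round(beat_freq * n_samples / fs)   # FFT bin of the true target
```

In the full pipeline the CFAR hits from the range-Doppler map would then be grouped by DBSCAN before fusion with the camera detections.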
39. FF-HPINet: A Flipped Feature and Hierarchical Position Information Extraction Network for Lane Detection.
- Author
-
Zhou, Xiaofeng and Zhang, Peng
- Subjects
- *DATA mining, *INFORMATION networks, *FEATURE extraction, *DEEP learning, *AUTONOMOUS vehicles
- Abstract
Effective lane detection technology plays an important role in the current autonomous driving system. Although deep learning models, with their intricate network designs, have proven highly capable of detecting lanes, there persist key areas requiring attention. Firstly, the symmetry inherent in visuals captured by forward-facing automotive cameras is an underexploited resource. Secondly, the vast potential of position information remains untapped, which can undermine detection precision. In response to these challenges, we propose FF-HPINet, a novel approach for lane detection. We introduce the Flipped Feature Extraction module, which models pixel pairwise relationships between the flipped feature and the original feature. This module allows us to capture symmetrical features and obtain high-level semantic feature maps from different receptive fields. Additionally, we design the Hierarchical Position Information Extraction module to meticulously mine the position information of the lanes, vastly improving target identification accuracy. Furthermore, the Deformable Context Extraction module is proposed to distill vital foreground elements and contextual nuances from the surrounding environment, yielding focused and contextually apt feature representations. Our approach achieves excellent performance with the F1 score of 97.00% on the TuSimple dataset and 76.84% on the CULane dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. A novel data augmentation approach for ego-lane detection enhancement.
- Author
-
Yousri, Retaj, Moussa, Kareem, Elattar, Mustafa A., Madian, Ahmed H., and Darweesh, M. Saeed
- Abstract
Utilizing vast annotated datasets for supervised training of deep learning models is an absolute necessity. The focus of this paper is to demonstrate a supervisory training technique using perspective transformation-based data augmentation to train various cutting-edge architectures for the ego-lane detection task. Creating a reliable dataset for training such models has been challenging due to the lack of efficient augmentation methods that can produce new annotated images without missing important features about the lane or the road. Based on extensive experiments training the three architectures SegNet, U-Net, and ResUNet++, we show that the perspective transformation data augmentation strategy noticeably improves the performance of the models. ResUNet++ achieved a validation dice of 0.991 when trained on 6,000 images using the PTA method, and a dice coefficient of 96.04% when tested on the KITTI Lane benchmark, which contains 95 images of different urban scenes, exceeding the results of other papers. An ensemble learning approach is also introduced while testing the models to achieve the most robust performance under various challenging conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
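As a companion sketch (assumed details, not the authors' exact augmentation), perspective-transformation augmentation must warp the lane annotations with the same homography as the image so the new labels stay valid:

```python
import numpy as np

def apply_homography(points, H):
    """Map Nx2 pixel coordinates through a 3x3 homography matrix."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]   # divide out the projective scale

# Hypothetical annotated ego-lane points (x, y) in a 100x100 image.
lane_points = np.array([[50.0, 90.0], [52.0, 60.0], [55.0, 30.0]])

# A mild perspective change: identity plus small shear, shift, and a
# projective term (values invented for illustration).
H = np.array([[1.0, 0.05, 2.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0005, 1.0]])

# The image would be warped with the same H, keeping labels aligned.
augmented = apply_homography(lane_points, H)
```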
41. Polylanenet++: enhancing the polynomial regression lane detection based on spatio-temporal fusion.
- Author
-
Yang, Chuanwu, Tian, Zhihui, You, Xinge, Jia, Kang, Liu, Tong, Pan, Zhibin, and John, Vijay
- Abstract
Deep learning has made significant progress in lane detection across various public datasets, with models, such as PolyLaneNet, being computationally efficient. However, these models have limited spatial generalization capabilities, which ultimately lead to decreased accuracy. To address this issue, we propose a polynomial regression-based deep learning model that enhances spatial generalization and incorporates temporal information to improve the accuracy. Our model has been tested on public datasets, such as TuSimple and VIL100, and the results show that it outperforms PolyLaneNet and achieves state-of-the-art results. Incorporation of temporal information is also advantageous. Overall, our proposed framework offers improved accuracy and practicality in real-time applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
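A minimal sketch of the two ingredients named in the abstract above, polynomial lane regression and temporal fusion. An exponential moving average over coefficients stands in for the paper's spatio-temporal module, and all parameters are illustrative:

```python
import numpy as np

def fit_lane(points, degree=2):
    """Fit x = f(y) through lane points, as polynomial-regression detectors do."""
    y, x = points[:, 1], points[:, 0]
    return np.polyfit(y, x, degree)

def temporal_smooth(prev_coeffs, new_coeffs, alpha=0.7):
    """Blend the new frame's fit with the running estimate."""
    if prev_coeffs is None:
        return new_coeffs
    return alpha * prev_coeffs + (1 - alpha) * new_coeffs

# Two consecutive frames of noisy points on the lane x = 0.01*y^2 + 20.
rng = np.random.default_rng(1)
ys = np.linspace(0, 100, 20)
frames = [np.column_stack([0.01 * ys**2 + 20 + rng.normal(0, 0.5, ys.size), ys])
          for _ in range(2)]

smoothed = None
for frame in frames:
    smoothed = temporal_smooth(smoothed, fit_lane(frame))
```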
42. PortLaneNet: A Scene-Aware Model for Robust Lane Detection in Container Terminal Environments.
- Author
-
Ye, Haixiong, Kang, Zhichao, Zhou, Yue, Zhang, Chenhe, Wang, Wei, and Zhang, Xiliang
- Subjects
ROAD markings, ARTIFICIAL intelligence, CONTAINER terminals, PRIOR learning, DEEP learning
- Abstract
In this paper, we introduce PortLaneNet, an optimized lane detection model specifically designed for the unique challenges of enclosed container terminal environments. Unlike conventional lane detection scenarios, this model addresses complexities such as intricate ground markings, tire crane lane lines, and various types of regional lines that significantly complicate detection tasks. Our approach includes the novel Scene Prior Perception Module, which leverages pre-training to provide essential prior information for more accurate lane detection. This module capitalizes on the enclosed nature of container terminals, where images from similar area scenes offer effective prior knowledge to enhance detection accuracy. Additionally, our model significantly improves understanding by integrating both high- and low-level image features through attention mechanisms, focusing on the critical components of lane detection. Through rigorous experimentation, PortLaneNet has demonstrated superior performance in port environments, outperforming traditional lane detection methods. The results confirm the effectiveness and superiority of our model in addressing the complex challenges of lane detection in such specific settings. Our work provides a valuable reference for solving lane detection issues in specialized environments and proposes new ideas and directions for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
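The high/low-level feature fusion via attention mentioned above can be sketched generically. The sigmoid gating below is an assumed mechanism for illustration, not PortLaneNet's actual module:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_fuse(low, high, w=1.0, b=0.0):
    """Gate a low-level feature map with attention from the high-level map.

    low, high: (H, W, C) feature maps at the same resolution.
    """
    attn = sigmoid(w * high.mean(axis=-1, keepdims=True) + b)   # (H, W, 1)
    return low * attn + high

rng = np.random.default_rng(2)
low = rng.standard_normal((8, 8, 4))    # fine detail (edges, markings)
high = rng.standard_normal((8, 8, 4))   # semantic context (scene layout)
fused = attention_fuse(low, high)
```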
43. Optimal Traffic Control for a Tandem Intersection With Improved Lane Assignments at Presignals.
- Author
-
Wan, Jian, Wang, Chunguang, and Bie, Yiming
- Abstract
This article focuses on the optimal traffic control strategy for a tandem intersection with presignals. First, the impact of a lane assignment scheme at presignals on the operational performance of the tandem intersection is analyzed. An improved lane assignment method is thus proposed to avoid setting the lanes with the same traffic flow direction adjacent to each other as far as possible. Second, vehicle delay models behind the presignals and in the sorting area are established, respectively. Third, considering green times of presignal phases and sorting area lengths on all approaches as variables, an optimization model is proposed to minimize the average vehicle delay. Finally, a case study is conducted on a congested intersection in Changchun, China, to validate the proposed method. The effects of the proposed improved lane assignment scheme on the tandem intersection are analyzed compared with the traditional lane assignment scheme. The results show that the former can decrease average vehicle delay by 22.9% and increase intersection capacity by 18.6%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
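The delay-minimization idea above can be illustrated with a toy model. The Webster-style uniform delay term and grid search below are generic stand-ins, not the article's delay models or optimization method, and all flows and timings are invented:

```python
import numpy as np

def uniform_delay(cycle, green, flow, sat_flow=1800 / 3600.0):
    """Webster's uniform delay term (s/veh) for one movement."""
    lam = green / cycle                       # effective green ratio
    x = min(flow / (lam * sat_flow), 0.99)    # degree of saturation (capped)
    return 0.5 * cycle * (1 - lam) ** 2 / (1 - lam * x)

cycle = 90.0                                  # cycle length in seconds
flows = (0.15, 0.25)                          # veh/s for two conflicting movements

# Grid-search the green split that minimizes flow-weighted average delay.
best = min(
    ((g, (flows[0] * uniform_delay(cycle, g, flows[0])
          + flows[1] * uniform_delay(cycle, cycle - g, flows[1]))
         / sum(flows))
     for g in np.arange(10.0, 81.0, 1.0)),
    key=lambda t: t[1],
)
green_1, avg_delay = best
```

As expected, the heavier movement receives the larger share of the cycle.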
44. U-Net-Based Learning Using Enhanced Lane Detection with Directional Lane Attention Maps for Various Driving Environments.
- Author
-
Lee, Seung-Hwan and Lee, Sung-Hak
- Subjects
- *DRIVER assistance systems, *OPTICAL sensors
- Abstract
Recent advancements in optical and electronic sensor technologies, coupled with the proliferation of computing devices (such as GPUs), have enabled real-time autonomous driving systems to become a reality. Hence, research in algorithmic advancements for advanced driver assistance systems (ADASs) is rapidly expanding, with a primary focus on enhancing robust lane detection capabilities to ensure safe navigation. Given the widespread adoption of cameras on the market, lane detection relies heavily on image data. Recently, CNN-based methods have attracted attention due to their effective performance in lane detection tasks. However, with the expansion of the global market, the endeavor to achieve reliable lane detection has encountered challenges presented by diverse environmental conditions and road scenarios. This paper presents an approach that focuses on detecting lanes in road areas traversed by vehicles equipped with cameras. In the proposed method, a U-Net-based framework is employed for training, and additional lane-related information is integrated into a four-channel input data format that considers lane characteristics. The fourth channel serves as the edge attention map (E-attention map), helping the modules achieve more specialized learning regarding the lane. Additionally, an approach that assigns weights to the loss function during training enhances the stability and speed of the learning process, enabling robust lane detection. Through ablation experiments, the optimization of each parameter and the efficiency of the proposed method are demonstrated. In addition, a comparative analysis with existing CNN-based lane detection algorithms shows that the proposed training method delivers superior performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
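The four-channel input idea above can be sketched as follows. A Sobel gradient magnitude is used here as a simple stand-in for the paper's E-attention map, so the network receives lane edges as an explicit extra channel:

```python
import numpy as np

def sobel_edges(gray):
    """Normalized Sobel gradient magnitude of a grayscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

# Synthetic frame with one vertical intensity edge (a crude "lane marking").
rgb = np.zeros((32, 32, 3))
rgb[:, 16:, :] = 1.0
gray = rgb.mean(axis=2)

# Stack RGB + edge map into the four-channel input format.
four_channel = np.dstack([rgb, sobel_edges(gray)])
```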
45. A Fast and Accurate Lane Detection Method Based on Row Anchor and Transformer Structure.
- Author
-
Chai, Yuxuan, Wang, Shixian, and Zhang, Zhijia
- Subjects
- *TRANSFORMER models, *DRIVER assistance systems, *ROAD markings, *ELECTRIC transformers, *POWER transformers
- Abstract
Lane detection plays a pivotal role in the successful implementation of Advanced Driver Assistance Systems (ADASs), which are essential for detecting the road's lane markings and determining the vehicle's position, thereby influencing subsequent decision making. However, current deep learning-based lane detection methods encounter challenges. Firstly, on-board hardware limitations necessitate an exceptionally fast prediction speed for the lane detection method. Secondly, improvements are required for effective lane detection in complex scenarios. This paper addresses these issues by enhancing the row-anchor-based lane detection method. The Transformer encoder–decoder structure is leveraged for row classification, enhancing the model's capability to extract global features and detect lane lines in intricate environments. The Feature-aligned Pyramid Network (FaPN) structure serves as an auxiliary branch, complemented by a novel structural loss with expectation loss, further refining the method's accuracy. The experimental results demonstrate our method's commendable accuracy and real-time performance, achieving a rapid prediction speed of 129 FPS (a single model prediction on an RTX 3080 takes 15.72 ms) and 96.16% accuracy on the TuSimple dataset, a 3.32% improvement over the baseline method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
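The row-anchor formulation with an expectation-style readout can be sketched as follows. The soft-argmax readout below is an assumed, simplified version of the idea: for each row anchor the network emits logits over grid cells, and the lane x-position is the softmax-weighted expectation of cell centers, which is differentiable and sub-cell accurate:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def expected_positions(logits, cell_centers):
    """logits: (rows, cells) -> expected lane x-position per row anchor."""
    probs = softmax(logits)
    return probs @ cell_centers

n_cells = 10
cell_centers = np.linspace(5.0, 95.0, n_cells)   # x-centers of the grid cells

# Fake logits for 3 row anchors, peaked equally between cells 4 and 5,
# so the expectation lands between their centers (45 and 55).
logits = np.full((3, n_cells), -5.0)
logits[:, 4] = 3.0
logits[:, 5] = 3.0

xs = expected_positions(logits, cell_centers)
```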
46. Deep learning-based path tracking control using lane detection and traffic sign detection for autonomous driving.
- Author
-
Jaiswal, Swati and Mohan, B. Chandra
- Subjects
- *DEEP learning, *TRAFFIC signs & signals, *TRAFFIC monitoring, *CONVOLUTIONAL neural networks, *TRAFFIC lanes, *AUTONOMOUS vehicles, *SIGNAL detection
- Abstract
Automated vehicles are a significant advancement in transportation technology, providing safe, sustainable, and reliable transport. Lane detection, maneuver forecasting, and traffic sign recognition are the fundamentals of automated vehicles. Hence, this research focuses on developing a dynamic real-time decision-making system to obtain an effective driving experience in autonomous vehicles with the advancement of deep learning techniques. Deep learning classifiers such as the deep convolutional neural network (Deep CNN) and SegNet are utilized in this research for traffic signal detection, road segmentation, and lane detection. The main highlight of the research relies on the proposed Finch Hunt optimization, which involves the hyperparameter tuning of the deep learning classifier. The proposed real-time decision-making system achieves 97.44% accuracy, 97.56% sensitivity, and 97.83% specificity. Further, the proposed segmentation model achieves the highest clustering accuracy with 90.37%, and the proposed lane detection model attains the lowest mean absolute error, mean square error, and root mean square error of 17.76%, 11.32%, and 5.66%, respectively. The proposed road segmentation model exceeds all the competing models in terms of clustering accuracy. Finally, the proposed model provides better lane detection output with minimal error compared with existing models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
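The abstract above reports accuracy, sensitivity, and specificity; as a quick companion, these metrics follow directly from a binary confusion matrix (the labels below are illustrative, not the paper's data):

```python
import numpy as np

def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), and specificity from binary labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
acc, sens, spec = confusion_metrics(y_true, y_pred)
```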
47. Computer Vision-Based Lane Detection and Detection of Vehicle, Traffic Sign, Pedestrian Using YOLOv5.
- Author
-
Öztürk, Gülyeter, Eldoğan, Osman, and Köker, Raşit
- Subjects
- *TRAFFIC signs & signals, *OBJECT recognition (Computer vision), *COMPUTER vision, *DEEP learning, *PEDESTRIANS, *CONVOLUTIONAL neural networks, *TRAFFIC accidents
- Abstract
There has been a global increase in the number of vehicles in use, resulting in a higher occurrence of traffic accidents. Advancements in computer vision and deep learning enable vehicles to independently perceive and navigate their environment, making decisions that enhance road safety and reduce traffic accidents. Worldwide, accidents can be prevented in both driver-operated and autonomous vehicles by detecting living and inanimate objects such as vehicles, pedestrians, animals, and traffic signs in the environment, as well as by identifying lanes and obstacles. In our proposed system, road images are captured using a camera positioned behind the front windshield of the vehicle. Computer vision techniques are employed to detect straight or curved lanes in the captured images. The right and left lanes within the driving area of the vehicle are identified, and the drivable area of the vehicle is highlighted with a different color. To detect traffic signs, pedestrians, cars, and bicycles around the vehicle, we utilize the YOLOv5 model, which is based on Convolutional Neural Networks. We use a combination of study-specific images and the GRAZ dataset in our research. In the object detection study, which involves 10 different objects, we evaluate the performance of five different versions of the YOLOv5 model. Our evaluation metrics include precision, recall, precision-recall curves, F1 score, and mean average precision. The experimental results clearly demonstrate the effectiveness of our proposed lane detection and object detection method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
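The evaluation metrics listed above (precision, recall, F1, mean average precision) all rest on box overlap; a minimal intersection-over-union function, with invented boxes for illustration:

```python
def box_iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping in a 5x5 region (intersection area 25).
iou = box_iou((0, 0, 10, 10), (5, 5, 15, 15))
```

A predicted box typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5, which is how the precision/recall curves behind mAP are built.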
48. BIPOOLNET: An advanced UNet architecture for enhanced lane detection in autonomous vehicles.
- Author
-
P, Santhiya, Jebadurai, Immanuel JohnRaja, Paulraj, Getzi Jeba Leelipushpam, and A, Jenefa
- Subjects
URBAN ecology, AUTONOMOUS vehicles, PYRAMIDS
- Abstract
In the rapidly evolving landscape of autonomous vehicle technology, the imperative to bolster safety within smart urban ecosystems has never been more critical. This endeavor requires the deployment of advanced detection systems capable of navigating the intricacies of pedestrian, near-vehicle, and lane detection challenges, with a particular focus on the nuanced requirements of curved lane navigation – a domain where traditional AI models exhibit notable deficiencies. This paper introduces BIPOOLNET, an innovative encoder-decoder neural architecture, ingeniously augmented with a feature pyramid to facilitate the precise delineation of curved lane geometries. BIPOOLNET integrates max pooling and average pooling to extract critical features and mitigate the complexity of the feature map, redefining the benchmarks for lane detection technology. Rigorous evaluation using the TuSimple dataset underscores BIPOOLNET's exemplary performance, evidenced by an unprecedented accuracy rate of 98.45%, an F1-score of 98.17%, and notably minimal false positive (1.84%) and false negative (1.09%) rates. These findings not only affirm BIPOOLNET's supremacy over extant models but also signal a paradigm shift in enhancing the safety and navigational precision of autonomous vehicles, offering a scalable, robust solution to the multifaceted challenges posed by real-world driving dynamics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
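The max/average pooling fusion attributed to BIPOOLNET can be sketched generically; running both poolings over the same windows keeps strong edge responses while smoothing noise. The equal 0.5/0.5 fusion weights are an assumption for illustration:

```python
import numpy as np

def pool2d(x, k, mode):
    """Non-overlapping k-by-k pooling over a 2-D feature map."""
    h, w = x.shape
    out = np.empty((h // k, w // k))
    for i in range(h // k):
        for j in range(w // k):
            win = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            out[i, j] = win.max() if mode == "max" else win.mean()
    return out

def fused_pool(x, k=2):
    """Average of max-pooled and mean-pooled maps (assumed fusion rule)."""
    return 0.5 * (pool2d(x, k, "max") + pool2d(x, k, "avg"))

feat = np.array([[0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 4.0],
                 [2.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0]])
pooled = fused_pool(feat)
```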
49. Real-Time Vehicle and Lane Detection using Modified OverFeat CNN: A Comprehensive Study on Robustness and Performance in Autonomous Driving.
- Author
-
Saikat, Monowar Hossain, Avi, Sonjoy Paul, Islam, Kazi Toriqul, Tahmina, Tanjida, Abdullah, Md Shahriar, and Imam, Touhid
- Subjects
AUTOMOBILE driving, LIDAR, GLOBAL Positioning System, CAMERAS, ROBUST control
- Abstract
This study investigates the use of deep learning methods, specifically Convolutional Neural Networks (CNNs), for real-time detection of vehicles and lane boundaries in highway driving scenarios. It evaluates the performance of a modified OverFeat CNN architecture using a comprehensive dataset of annotated frames captured by a variety of sensors, including cameras, LIDAR, radar, and GPS. The system demonstrates robustness in detecting vehicles and predicting lane shapes in 3D while achieving operating rates of over 10 Hz on various GPU configurations. Key findings include highly accurate vehicle bounding-box predictions, resistance to occlusions, and efficient lane boundary identification. Finally, the research underlines the potential applicability of this system to autonomous driving, presenting a promising avenue for future developments in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Lane Detection Algorithm Based on an ARM Embedded Platform.
- Author
-
关恬恬 and 杨帆
- Abstract
Copyright of Chinese Journal of Liquid Crystal & Displays is the property of Chinese Journal of Liquid Crystal & Displays and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF