Convolutional Neural Networks Adapted for Regression Tasks: Predicting the Orientation of Straight Arrows on Marked Road Pavement Using Deep Learning and Rectified Orthophotography.
- Authors
Cira, Calimanut-Ionut, Díaz-Álvarez, Alberto, Serradilla, Francisco, and Manso-Callejo, Miguel-Ángel
- Subjects
DEEP learning, CONVOLUTIONAL neural networks, ORTHOPHOTOGRAPHY, PAVEMENTS, DECISION support systems, DRIVERLESS cars, ROADS
- Abstract
Arrow signs found on roadway pavement are an important component of modern transportation systems. Given the rise in autonomous vehicles, public agencies are increasingly interested in accurately identifying and analysing detailed road pavement information to generate comprehensive road maps and decision support systems that can optimise traffic flow, enhance road safety, and provide complete official road cartographic support (that can be used in autonomous driving tasks). As arrow signs are a fundamental component of traffic guidance, this paper aims to present a novel deep learning-based approach to identify the orientation and direction of arrow signs on marked roadway pavements using high-resolution aerial orthoimages. The approach is based on convolutional neural network architectures (VGGNet, ResNet, Xception, and DenseNet) that are modified and adapted for regression tasks with a proposed learning structure, together with an ad hoc model, specially introduced for this task. Although the best-performing artificial neural network was based on VGGNet (VGG-19 variant), it only slightly surpassed the proposed ad hoc model in the average values of the R2 score, mean squared error, and angular error by 0.005, 0.001, and 0.036, respectively, using the training set (the ad hoc model delivered an average R2 score, mean squared error, and angular error of 0.9874, 0.001, and 2.516, respectively). Furthermore, the ad hoc model's predictions using the test set were the most consistent (a standard deviation of the R2 score of 0.033 compared with the score of 0.042 achieved using VGG-19), while being almost eight times more computationally efficient than the VGG-19 model (2,673,729 parameters vs. VGG-19's 20,321,985 parameters). [ABSTRACT FROM AUTHOR]
- Published
- 2023
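
Note: the abstract describes replacing the classification heads of standard CNN backbones (e.g., VGG-19) with regression outputs evaluated by R2 score, mean squared error, and angular error, but it does not give the exact learning structure. The following is a minimal illustrative sketch of one common way to set up such an adaptation in Keras; the input size, layer widths, and the (sin θ, cos θ) encoding of the orientation angle are assumptions made for illustration and are not taken from the paper.

# Minimal sketch (not the authors' code): adapting a classification CNN
# backbone for orientation regression. Encoding the angle as (sin, cos)
# is an assumed design choice, not confirmed by the abstract.
import numpy as np
import tensorflow as tf

def build_vgg19_regressor(input_shape=(64, 64, 3)):
    """VGG-19 backbone with its classification head replaced by a
    two-unit regression head that predicts (sin theta, cos theta)."""
    backbone = tf.keras.applications.VGG19(
        include_top=False, weights=None, input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    out = tf.keras.layers.Dense(2, activation="tanh")(x)  # (sin, cos) in [-1, 1]
    model = tf.keras.Model(backbone.input, out)
    model.compile(optimizer="adam", loss="mse")  # MSE on the encoded angle
    return model

def angular_error_deg(y_true, y_pred):
    """Mean absolute angular error in degrees, recovered from (sin, cos)."""
    a_true = np.degrees(np.arctan2(y_true[:, 0], y_true[:, 1]))
    a_pred = np.degrees(np.arctan2(y_pred[:, 0], y_pred[:, 1]))
    diff = np.abs(a_true - a_pred) % 360.0
    return float(np.mean(np.minimum(diff, 360.0 - diff)))

Encoding the target as a sine/cosine pair (rather than a raw angle) avoids the discontinuity at 0°/360°, so a plain mean-squared-error loss remains meaningful; the angular error is then recovered from the predicted pair for evaluation.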