Road Anomaly Detection with Unknown Scenes Using DifferNet-Based Automatic Labeling Segmentation
- Authors
Phuc Thanh-Thien Nguyen, Toan-Khoa Nguyen, Dai-Dong Nguyen, Shun-Feng Su, and Chung-Hsien Kuo
- Subjects
automated labeling; drivable area and road obstacle detection; mobile robots; semantic segmentation; transfer learning; Engineering machinery, tools, and implements; TA213-215; Technological innovations. Automation; HD45-45.2
- Abstract
Obstacle avoidance is essential for the effective operation of autonomous mobile robots, enabling them to detect and navigate around obstacles in their environment. While deep learning provides significant benefits for autonomous navigation, it typically requires large, accurately labeled datasets, making data preparation and processing time-consuming and labor-intensive. To address this challenge, this study introduces a transfer learning (TL)-based automatic labeling segmentation (ALS) framework. This framework utilizes a pretrained attention-based network, DifferNet, to efficiently perform semantic segmentation tasks on new, unlabeled datasets. DifferNet leverages prior knowledge from the Cityscapes dataset to identify high-entropy areas as road obstacles by analyzing differences between the input and resynthesized images. The resulting road anomaly map is refined using depth information to produce robust drivable-area and road-anomaly maps. Several off-the-shelf RGB-D semantic segmentation neural networks were trained using pseudo-labels generated by the ALS framework, with validation conducted on the GMRPD dataset. Experimental results demonstrated that the proposed ALS framework achieved mean precision, mean recall, and mean intersection over union (IoU) rates of 80.31%, 84.42%, and 71.99%, respectively. By combining transfer learning with DifferNet, the ALS framework offers an efficient solution for semantic segmentation of new, unlabeled datasets, underscoring its potential for improving obstacle avoidance in autonomous mobile robots.
- Published
2024
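For readers who want a concrete picture of the pipeline the abstract describes, the sketch below illustrates the general idea of thresholding a per-pixel uncertainty (entropy) map and refining the result with depth. It is a minimal illustration only, assuming NumPy and synthetic inputs; the function names, thresholds, and random data are hypothetical, and the actual ALS framework derives its anomaly map from DifferNet's comparison of input and resynthesized images rather than from this simplified entropy heuristic.

```python
import numpy as np


def pixelwise_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy per pixel from a class-probability map of shape (C, H, W)."""
    eps = 1e-8
    return -np.sum(probs * np.log(probs + eps), axis=0)


def anomaly_mask(probs: np.ndarray,
                 depth: np.ndarray,
                 entropy_thresh: float = 1.0,
                 max_depth_m: float = 10.0) -> np.ndarray:
    """Flag pixels whose segmentation entropy is high, then drop detections
    beyond a depth cutoff as a crude stand-in for depth-based refinement."""
    ent = pixelwise_entropy(probs)
    mask = ent > entropy_thresh      # high uncertainty -> candidate obstacle
    mask &= depth < max_depth_m      # keep only detections within sensing range
    return mask


if __name__ == "__main__":
    # Toy inputs standing in for a segmentation network's softmax output
    # and the aligned depth channel of an RGB-D frame.
    C, H, W = 19, 480, 640           # 19 classes, as in Cityscapes
    logits = np.random.randn(C, H, W)
    probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    depth = np.random.uniform(0.5, 20.0, size=(H, W))
    mask = anomaly_mask(probs, depth)
    print("anomalous pixels:", int(mask.sum()))
```

In the paper's setting, the resulting mask would serve as a pseudo-label for training off-the-shelf RGB-D segmentation networks; the depth cutoff here is only one plausible form of the depth-based refinement mentioned in the abstract.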