
A parallel vision approach to scene-specific pedestrian detection

Authors :
Fei-Yue Wang
Kunfeng Wang
Lu Yue
Yating Liu
Wenwen Zhang
Source :
Neurocomputing. 394:114-126
Publication Year :
2020
Publisher :
Elsevier BV, 2020.

Abstract

In recent years, with the development of computing power and deep learning algorithms, pedestrian detection has made great progress. Nevertheless, once a detection model trained on generic datasets (such as PASCAL VOC and MS COCO) is applied to a specific scene, its precision is limited by the distribution gap between the generic data and the specific scene data. It is difficult to train the model for a specific scene due to the lack of labeled data from that scene. Even if we manage to obtain some labeled data from a specific scene, changing environmental conditions make the pre-trained model perform poorly. In light of these issues, we propose a parallel vision approach to scene-specific pedestrian detection. Given an object detection model, it is trained via two sequential stages: (1) the model is pre-trained on augmented-reality data, to address the lack of scene-specific training data; (2) the pre-trained model is incrementally optimized with newly synthesized data as the specific scene evolves over time. On publicly available datasets, our approach leads to higher precision than models trained on generic data. To tackle the dynamically changing scene, we further evaluate our approach on webcam data collected from Church Street Market Place, and the results are also encouraging.
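The two-stage training scheme described in the abstract can be illustrated with a minimal toy sketch. Everything here is hypothetical: a logistic-regression classifier stands in for the detection model, randomly generated separable points stand in for the augmented-reality data, and a drifting decision boundary stands in for the evolving scene. Stage 1 pre-trains on the synthetic data; stage 2 incrementally fine-tunes the pre-trained weights on each newly synthesized batch.

```python
import numpy as np

def sgd_step(w, X, y, lr):
    """One full-batch gradient step on the logistic (log) loss."""
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
    grad = X.T @ (p - y) / len(y)      # gradient of the mean log-loss
    return w - lr * grad

def train(w, X, y, lr=0.5, epochs=200):
    """Run several gradient steps, starting from the given weights."""
    for _ in range(epochs):
        w = sgd_step(w, X, y, lr)
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == (y > 0.5)))

rng = np.random.default_rng(0)

# Stage 1: pre-train on "augmented-reality" (synthetic) data whose labels
# come from a known separating direction w_true.
w_true = np.array([2.0, -1.0])
X_syn = rng.normal(size=(500, 2))
y_syn = (X_syn @ w_true > 0).astype(float)
w = train(np.zeros(2), X_syn, y_syn)   # the pre-trained "generic" model

# Stage 2: the scene evolves over time -- here, its decision boundary
# drifts each step. Incrementally optimize the pre-trained model on each
# newly synthesized batch instead of retraining from scratch.
for t in range(5):
    w_scene = w_true + 0.3 * (t + 1) * np.array([0.0, 1.0])  # drifting boundary
    X_t = rng.normal(size=(200, 2))
    y_t = (X_t @ w_scene > 0).astype(float)
    w = train(w, X_t, y_t, epochs=50)  # incremental fine-tuning

print("final-batch accuracy:", round(accuracy(w, X_t, y_t), 2))
```

The point of the sketch is the control flow, not the model: the fine-tuned weights track the drifting scene, whereas the frozen stage-1 weights would degrade as the boundary moves away from the generic one.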

Details

ISSN :
0925-2312
Volume :
394
Database :
OpenAIRE
Journal :
Neurocomputing
Accession number :
edsair.doi...........3580ac8008408c969d1a0bd878afa45e