Person Re-identification on Heterogeneous Camera Network
- Source :
- Communications in Computer and Information Science ISBN: 9789811073045, CCCV (3)
- Publication Year :
- 2017
- Publisher :
- Springer Singapore, 2017.
Abstract
- Person re-identification (re-id) aims at matching person images across multiple surveillance cameras. Currently, most re-id systems rely heavily on color cues, which are effective only under good illumination but fail in low-lighting conditions. However, for security reasons it is important to maintain surveillance in low-lighting conditions. To address this problem, we propose using depth cameras for surveillance in dark places while using traditional RGB cameras in bright places. Such a heterogeneous camera network introduces the challenge of matching images across depth and RGB cameras. In this paper, we mine the correlation between the two heterogeneous cues (depth and RGB) at both the feature level and the transformation level. As a result, depth-based features and RGB-based features are transformed into the same space, which alleviates the problem of cross-modality matching between depth and RGB cameras. Experimental results on two benchmark heterogeneous person re-id datasets show the effectiveness of our method.
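- As a rough illustration of the general idea in the abstract (not the authors' actual method), the sketch below projects depth-based and RGB-based features into a shared space with Canonical Correlation Analysis and ranks an RGB gallery against a depth-camera probe. The feature dimensions and data are synthetic placeholders.

```python
# Hedged sketch: cross-modality matching by projecting two heterogeneous
# feature types into a common space. CCA stands in here for the paper's
# feature-level and transformation-level correlation mining.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_pairs = 200                                   # matched depth/RGB pairs (synthetic)
depth_feats = rng.normal(size=(n_pairs, 64))    # e.g. skeleton / point-cloud descriptors
rgb_feats = rng.normal(size=(n_pairs, 128))     # e.g. color / texture descriptors

# Learn projections that maximally correlate the two modalities.
cca = CCA(n_components=32)
cca.fit(depth_feats, rgb_feats)

# Map both modalities into the shared space.
depth_shared, rgb_shared = cca.transform(depth_feats, rgb_feats)

# Rank the RGB gallery against a single depth-camera probe by Euclidean distance.
probe = depth_shared[0]
dists = np.linalg.norm(rgb_shared - probe, axis=1)
print("Top-5 gallery indices for the probe:", np.argsort(dists)[:5])
```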
- Subjects :
- Person re-identification
Computer vision
Computer science
Camera network
Color cues
RGB color model
Matching (statistics)
Benchmark (computing)
Artificial intelligence
Computing Methodologies: Image Processing and Computer Vision
Details
- ISBN :
- 978-981-10-7304-5
- Database :
- OpenAIRE
- Journal :
- Communications in Computer and Information Science ISBN: 9789811073045, CCCV (3)
- Accession number :
- edsair.doi...........bd55756173c85b2118e13dbd6a5caba2
- Full Text :
- https://doi.org/10.1007/978-981-10-7305-2_25