1. Learning rich features from objectness estimation for human lying-pose detection
- Author
- Guo-Xi Wu, Li-Chuan Geng, Dao-Xun Xia, Songzhi Su, and Shaozi Li
- Subjects
- Service robot, Computer Networks and Communications, Computer science, Machine vision, Orientation (computer vision), Software engineering, Engineering and technology, Convolutional neural network, Field (computer science), Perspective distortion, Hardware and Architecture, Feature (computer vision), Electrical engineering, Electronic engineering, Information engineering, Media Technology, Artificial intelligence, Image processing, Computer vision, Pyramid (image processing), Software, Information Systems
- Abstract
Lying-pose human detection has been an active research topic in computer vision in recent years. It has solid theoretical significance as well as many applications, such as victim detection and home service robots. However, lying-pose human detection in low-altitude overhead images still faces many unsolved problems owing to multiple poses, arbitrary orientation, in-plane rotation, perspective distortion, and high computational cost. In this paper, the proposed human lying-pose detection framework combines optimization and machine-learning algorithms, inspired by neurobiological processes and the human visual system, to select possible object locations. First, the model uses binarized normed gradient (BING) features to rapidly estimate objectness based on visual saliency. Then, unlike the classical sliding-window approach, a convolutional neural network is trained to learn rich feature hierarchies and to identify lying-pose humans among the objectness proposals. Finally, a pyramid mean-shift algorithm and a rotation-angle recovery method are employed to locate the position and orientation of the lying-pose human. Experimental results show that our method is fast and efficient, achieving state-of-the-art results on our XMULP dataset.
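The abstract describes a three-stage pipeline: objectness proposals from binarized normed gradient features, CNN scoring of the proposals, and pyramid mean-shift localization with rotation-angle recovery. The Python sketch below only illustrates how such a pipeline could be wired together; the proposal generator, CNN scorer, and plain mean-shift clustering are simplified, hypothetical stand-ins rather than the authors' implementation, and the rotation-angle recovery step is indicated only by a comment.

```python
import numpy as np

def objectness_proposals(image, num_proposals=200, rng=None):
    """Stand-in for BING objectness: return candidate boxes (x, y, w, h).
    A real system would rank windows by binarized normed gradient scores."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    xs = rng.integers(0, w // 2, num_proposals)
    ys = rng.integers(0, h // 2, num_proposals)
    ws = rng.integers(32, w // 2, num_proposals)
    hs = rng.integers(32, h // 2, num_proposals)
    return np.stack([xs, ys, ws, hs], axis=1)

def cnn_score(image, box):
    """Stand-in for the CNN classifier: return a lying-pose confidence in [0, 1]."""
    x, y, w, h = box
    patch = image[y:y + h, x:x + w]
    return float(patch.mean()) / 255.0  # placeholder score, not a trained network

def mean_shift_centers(points, bandwidth=40.0, iters=20):
    """Cluster detection centers with a plain mean shift on (x, y) coordinates."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(modes):
            d = np.linalg.norm(points - p, axis=1)
            neighbours = points[d < bandwidth]
            if len(neighbours):
                modes[i] = neighbours.mean(axis=0)
    # merge modes that landed on (nearly) the same location
    return np.unique(np.round(modes / 10) * 10, axis=0)

def detect_lying_pose(image, score_threshold=0.5):
    boxes = objectness_proposals(image)
    keep = [b for b in boxes if cnn_score(image, b) > score_threshold]
    if not keep:
        return []
    centers = np.array([[x + w / 2, y + h / 2] for x, y, w, h in keep])
    # rotation-angle recovery of each detected person would follow here
    return mean_shift_centers(centers)

if __name__ == "__main__":
    dummy = np.full((480, 640), 160, dtype=np.uint8)  # stand-in overhead image
    print(detect_lying_pose(dummy))
```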
- Published
- 2016