1. Deep Convolutional Neural Networks for Human Action Recognition Using Depth Maps and Postures.
- Author
- Kamel, Aouaidjia; Sheng, Bin; Yang, Po; Li, Ping; Shen, Ruimin; Feng, David Dagan
- Subjects
- HUMAN behavior, HUMAN activity recognition, DESCRIPTOR systems, FEATURE extraction, POSTURE, IMAGE color analysis, CONVOLUTIONAL neural networks
- Abstract
- In this paper, we present a method (Action-Fusion) for human action recognition from depth maps and posture data using convolutional neural networks (CNNs). Two input descriptors are used for action representation. The first input is a depth motion image that accumulates consecutive depth maps of a human action, whilst the second input is a proposed moving joints descriptor that represents the motion of body joints over time. In order to maximize feature extraction for accurate action classification, three CNN channels are trained with different inputs. The first channel is trained with depth motion images (DMIs), the second channel is trained with both DMIs and moving joint descriptors together, and the third channel is trained with moving joint descriptors only. The action predictions generated from the three CNN channels are fused together for the final action classification. We propose several score-fusion operations to maximize the score of the correct action. The experiments show that fusing the outputs of all three channels yields better results than using one channel or fusing only two channels. Our proposed method was evaluated on three public datasets: 1) the Microsoft action 3-D dataset (MSRAction3D); 2) the University of Texas at Dallas multimodal human action dataset (UTD-MHAD); and 3) the multimodal action dataset (MAD). The testing results indicate that the proposed approach outperforms most existing state-of-the-art methods, such as histogram of oriented 4-D normals and Actionlet, on MSRAction3D. Although the MAD dataset contains a large number of actions (35) compared with existing RGB-D action datasets, the proposed method surpasses a state-of-the-art method on this dataset by 6.84%. [ABSTRACT FROM AUTHOR] (A minimal, hypothetical sketch of the score-fusion step appears after this record.)
- Published
- 2019
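As a rough illustration of the score-fusion step described in the abstract, the sketch below combines per-class scores from three channel outputs with common late-fusion rules (average, maximum, product). The function names, array shapes, and the 20-class example are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of late score fusion across three CNN channels.
# All names and shapes here are hypothetical, not the Action-Fusion code.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert per-class logits into a probability distribution."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def fuse_scores(s_dmi, s_joint, s_both, method="average"):
    """Fuse class scores from the three channels into one prediction.

    s_dmi   -- scores from the depth-motion-image channel
    s_joint -- scores from the moving-joint-descriptor channel
    s_both  -- scores from the channel trained on both inputs
    """
    stacked = np.vstack([s_dmi, s_joint, s_both])
    if method == "average":
        fused = stacked.mean(axis=0)    # mean rule: average the scores
    elif method == "max":
        fused = stacked.max(axis=0)     # max rule: keep the strongest vote
    elif method == "product":
        fused = stacked.prod(axis=0)    # product rule: reward agreement
    else:
        raise ValueError(f"unknown fusion method: {method}")
    return int(np.argmax(fused)), fused

# Example with random logits for a 20-class problem (e.g., MSRAction3D).
rng = np.random.default_rng(0)
channel_scores = [softmax(rng.normal(size=20)) for _ in range(3)]
label, fused = fuse_scores(*channel_scores, method="product")
print("predicted action index:", label)
```

In practice the random scores would be replaced by the softmax outputs of the three trained channels; the product rule tends to favour classes on which all channels agree, which matches the abstract's stated goal of maximizing the score of the correct action.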