1. Real-time spatial normalization for dynamic gesture classification
- Author
- Aouaidjia Kamel, Sofiane Zeghoud, Saba Ghazanfar Ali, Xiaoyu Chi, Lijuan Mao, Jinman Kim, Ping Li, Bin Sheng, and Egemen Ertugrul
- Subjects
- Normalization (statistics), Data collection, Artificial neural network, Computer science, Generalization, Pattern recognition, Computer Graphics and Computer-Aided Design, Gesture recognition, Spatial normalization, Computer Vision and Pattern Recognition, Artificial intelligence, Spatial analysis, Software, Gesture
- Abstract
In this paper, we present a new spatial data generalization method applied to hand gesture recognition tasks. Data gathering can be a tedious task for gesture recognition, especially for dynamic gestures. When data are scarce, the standard solutions still consist of either the expensive collection of new data or the impractical use of hand-crafted data augmentation algorithms. While these solutions may show improvement, they come with disadvantages. We believe that a better extrapolation of the limited data’s common pattern, through improved generalization, should first be considered. We therefore propose a dynamic generalization method that captures and normalizes the spatial evolution of the input in real time. This procedure can be fully converted into a neural network processing layer, which we call the Evolution Normalization Layer (a minimal illustrative sketch of such a layer is given after this entry). Experimental results on the SHREC2017 dataset showed that adding the proposed layer improved the prediction accuracy of a standard sequence-processing model while requiring, on average, 6 times fewer weights for a similar score. Furthermore, when trained on only 10% of the original training data, the standard model reached a maximum accuracy of only 36.5% on its own and 56.8% when a state-of-the-art processing method was applied to the data, whereas adding our layer alone achieved a prediction accuracy of 81.5%.
- Published
- 2021
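
The abstract does not spell out the layer's exact formulation, so the sketch below is only a rough illustration of the general idea, not the authors' implementation. It assumes skeleton input shaped (batch, frames, joints, 3) and approximates the "spatial evolution" with causal running centroid and scale statistics computed from past frames only, so it could run online. The class name EvolutionNormalization, the tensor shapes, and the choice of centroid/scale statistics are all assumptions made for illustration.

```python
# Illustrative sketch only, NOT the published Evolution Normalization Layer.
# Assumption: input is a batch of skeleton sequences shaped (B, T, J, 3)
# (batch, frames, joints, xyz). "Spatial evolution" is approximated by a
# causal running centroid and scale, so normalization can run in real time.
import torch
import torch.nn as nn


class EvolutionNormalization(nn.Module):
    """Causal per-frame spatial normalization of a gesture sequence (sketch)."""

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, J, 3) joint coordinates over time.
        B, T, J, C = x.shape
        # Per-frame centroid of the joints: (B, T, 1, 3).
        centroid = x.mean(dim=2, keepdim=True)
        centered = x - centroid
        # Per-frame scale (mean joint distance to the centroid): (B, T, 1, 1).
        scale = centered.norm(dim=-1, keepdim=True).mean(dim=2, keepdim=True)
        # Causal running statistics: cumulative means over past frames only,
        # so no future frames are needed at inference time.
        t = torch.arange(1, T + 1, device=x.device).view(1, T, 1, 1)
        run_centroid = centroid.cumsum(dim=1) / t
        run_scale = scale.cumsum(dim=1) / t
        # Express each frame relative to the spatial evolution observed so far.
        return (x - run_centroid) / (run_scale + self.eps)


if __name__ == "__main__":
    layer = EvolutionNormalization()
    seq = torch.randn(8, 32, 22, 3)  # e.g. 22 hand joints over 32 frames (SHREC-like)
    out = layer(seq)                 # same shape, normalized w.r.t. past-frame statistics
    print(out.shape)                 # torch.Size([8, 32, 22, 3])
```

Because the sketch only uses cumulative statistics over past frames, it can be placed in front of a standard sequence model without changing its interface; this mirrors the paper's claim that the procedure converts into a processing layer, though the actual normalization the authors use may differ.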