Self-supervised action representation learning from partial consistency skeleton sequences.
- Source :
- Neural Computing & Applications. Jul 2024, Vol. 36, Issue 20, p12385-12395. 11p.
- Publication Year :
- 2024
Abstract
- In recent years, self-supervised representation learning for skeleton-based action recognition has achieved remarkable results on skeleton sequences, driven by advances in contrastive learning methods. However, existing methods often overlook local information within the skeleton data and therefore fail to learn fine-grained features efficiently. To leverage local features to enhance representation capacity and capture discriminative representations, we design an adaptive self-supervised contrastive learning framework for action recognition called AdaSCLR. In AdaSCLR, we introduce an adaptive spatiotemporal graph convolutional network to learn the topology of different samples and hierarchical levels, and apply an attention-mask module to extract salient and non-salient local features from the global features, emphasizing their significance and facilitating similarity-based learning. In addition, AdaSCLR extracts information from the upper and lower limbs as local features to help the model learn more discriminative representations. Experimental results show that our approach outperforms state-of-the-art methods on the NTURGB+D, NTU120-RGB+D, and PKU-MMD datasets. [ABSTRACT FROM AUTHOR]
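- To make the limb-level contrastive idea from the abstract concrete, the following is a minimal, hypothetical Python (PyTorch) sketch, not the authors' implementation: it pools encoder features over assumed upper- and lower-limb joint groups and aligns them across two augmented views with a standard InfoNCE loss. The joint indices, tensor shapes, encoder features, and temperature are placeholder assumptions.

  # Illustrative sketch only (not the paper's code): global + limb-level contrastive alignment.
  import torch
  import torch.nn.functional as F

  # Hypothetical NTU-style joint groups (25 joints); the exact indices are assumptions.
  UPPER_LIMB = [4, 5, 6, 7, 8, 9, 10, 11, 21, 22, 23, 24]
  LOWER_LIMB = [12, 13, 14, 15, 16, 17, 18, 19]

  def limb_features(x, joint_ids):
      # x: (batch, channels, frames, joints) feature map from some skeleton encoder.
      # Pool over time and over the selected joints to get a limb-level embedding.
      return x[..., joint_ids].mean(dim=(2, 3))

  def info_nce(q, k, temperature=0.07):
      # Standard InfoNCE between two batches of embeddings; positives lie on the diagonal.
      q = F.normalize(q, dim=1)
      k = F.normalize(k, dim=1)
      logits = q @ k.t() / temperature
      labels = torch.arange(q.size(0), device=q.device)
      return F.cross_entropy(logits, labels)

  # Toy usage: stand-in feature maps for two augmented views of the same sequences.
  feat_v1 = torch.randn(8, 256, 16, 25)
  feat_v2 = torch.randn(8, 256, 16, 25)
  global_1 = feat_v1.mean(dim=(2, 3))
  global_2 = feat_v2.mean(dim=(2, 3))
  loss = (info_nce(global_1, global_2)
          + info_nce(limb_features(feat_v1, UPPER_LIMB), limb_features(feat_v2, UPPER_LIMB))
          + info_nce(limb_features(feat_v1, LOWER_LIMB), limb_features(feat_v2, LOWER_LIMB)))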
- Subjects :
- *RECOGNITION (Psychology)
- *SKELETON
Details
- Language :
- English
- ISSN :
- 0941-0643
- Volume :
- 36
- Issue :
- 20
- Database :
- Academic Search Index
- Journal :
- Neural Computing & Applications
- Publication Type :
- Academic Journal
- Accession number :
- 178316424
- Full Text :
- https://doi.org/10.1007/s00521-024-09671-5