
CStrCRL: Cross-View Contrastive Learning Through Gated GCN With Strong Augmentations for Skeleton Recognition

Authors:
Hu, Ruotong
Wang, Xianzhi
Chang, Xiaojun
Zhang, Yongle
Hu, Yeqi
Liu, Xinyuan
Yu, Shusong
Source:
IEEE Transactions on Circuits and Systems for Video Technology; August 2024, Vol. 34, Issue 8, pp. 6674-6685 (12 pages)
Publication Year:
2024

Abstract

Contrastive learning, together with two augmentation regimes (normal and strong augmentations), has been widely embraced for its notable success in skeleton action recognition. Existing methods gain performance largely by customizing normal augmentations while bypassing strong augmentations, which are rich in motion patterns. To fill this gap, we propose a novel framework, called CStrCRL, that acquires view-invariant and discriminative features from strong augmentations by leveraging contrastive learning. Specifically, to prevent the fragility of skeleton data from adversely affecting the model once strong augmentations are applied, we use consistency learning to maximize the similarity between strongly and normally augmented views. Furthermore, we employ cross-view learning on strong and normal augmentations to eliminate the uncertain feature boundaries learned by the model. Moreover, we design a new backbone, termed GatedStrNet, to discriminate between valid and invalid features contained in strongly augmented views. Finally, extensive experiments on NTU 60/120 and PKU-MMD II demonstrate that the proposed method bridges the performance gap between normal and strong augmentations in contrastive learning for skeleton recognition. Notably, with a single-stream input, CStrCRL achieves accuracies of 78.93% and 84.04% on the NTU60 Xsub and Xview benchmarks, respectively. Our source code can be found at: https://github.com/RHu-main/CStrCRL.
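
The abstract describes the consistency objective and the gated backbone only at a high level. Below is a minimal sketch, in PyTorch, of what a cross-view consistency loss between normally and strongly augmented embeddings and a per-channel feature gate could look like; the names (cross_view_consistency_loss, FeatureGate), the InfoNCE formulation, and the temperature value are illustrative assumptions, not taken from the paper or its repository.

```python
import torch
import torch.nn.functional as F

def cross_view_consistency_loss(z_normal, z_strong, temperature=0.07):
    """InfoNCE-style consistency loss (hypothetical formulation).

    Pulls each strongly augmented embedding toward its normally
    augmented counterpart (positives on the diagonal) while pushing
    it away from the other samples in the batch.
    """
    z_normal = F.normalize(z_normal, dim=1)          # (B, D), normal view
    z_strong = F.normalize(z_strong, dim=1)          # (B, D), strong view
    logits = z_strong @ z_normal.t() / temperature   # (B, B) cosine similarities
    targets = torch.arange(z_normal.size(0), device=z_normal.device)
    return F.cross_entropy(logits, targets)

class FeatureGate(torch.nn.Module):
    """Hypothetical gating unit: a learned sigmoid gate that rescales
    each channel, loosely illustrating how a gated backbone could
    suppress invalid features introduced by strong augmentations."""

    def __init__(self, dim):
        super().__init__()
        self.gate = torch.nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, D) pooled features
        return x * torch.sigmoid(self.gate(x))

# Usage sketch: encode two augmented views of the same skeleton batch
# with a shared encoder, gate the features, and apply the loss.
B, D = 32, 256
encoder = torch.nn.Sequential(torch.nn.Linear(75, D), FeatureGate(D))
x_normal, x_strong = torch.randn(B, 75), torch.randn(B, 75)
loss = cross_view_consistency_loss(encoder(x_normal), encoder(x_strong))
```

In the paper's actual pipeline the encoder would be the proposed GatedStrNet operating on skeleton graphs rather than this toy linear stack, but the structure above reflects the stated goal of aligning strongly and normally augmented views.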

Details

Language:
English
ISSN:
1051-8215 and 1558-2205
Volume:
34
Issue:
8
Database:
Supplemental Index
Journal:
IEEE Transactions on Circuits and Systems for Video Technology
Publication Type:
Periodical
Accession Number:
ejs67162602
Full Text:
https://doi.org/10.1109/TCSVT.2023.3312049