
A Graph Skeleton Transformer Network for Action Recognition.

Authors :
Jiang, Yujian
Sun, Zhaoneng
Yu, Saisai
Wang, Shuang
Song, Yang
Source :
Symmetry (20738994); Aug 2022, Vol. 14, Issue 8, p1547, 19p
Publication Year :
2022

Abstract

Skeleton-based action recognition is a research hotspot in computer vision. The mainstream methods are currently based on Graph Convolutional Networks (GCNs). Although GCNs have many advantages, they rely mainly on predefined graph topologies to model dependencies between joints and are therefore limited in capturing long-distance dependencies. Transformer-based methods, in contrast, have been applied to skeleton-based action recognition because they capture long-distance dependencies effectively. However, existing Transformer-based methods discard the inherent connectivity of human skeleton joints because they do not exploit the initial graph structure. To improve the accuracy of skeleton-based action recognition, this paper proposes a Graph Skeleton Transformer Network (GSTN) that uses a Transformer architecture to extract global features while using the undirected graph, represented by a symmetric adjacency matrix, to extract local features. Two encodings are applied during feature processing to strengthen the semantic and centrality information of the joints. For multi-stream fusion, a grid-search-based method assigns a weight to each input stream to optimize the fused result. We tested our method on three action recognition datasets: NTU RGB+D 60, NTU RGB+D 120, and NW-UCLA. The experimental results show that our model's accuracy is comparable to state-of-the-art approaches.
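The record includes no code, but the abstract's two most concrete techniques can be sketched briefly. First, a minimal sketch of how an undirected skeleton graph can be stored as a symmetric adjacency matrix and normalized for local, GCN-style feature aggregation; the 5-joint toy skeleton and the symmetric normalization are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

# Hypothetical 5-joint toy skeleton; edges are undirected bone connections.
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
num_joints = 5

# Symmetric adjacency matrix: A == A.T because the graph is undirected.
A = np.zeros((num_joints, num_joints))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Add self-loops and apply the common D^{-1/2} (A + I) D^{-1/2}
# normalization used when aggregating features from neighboring joints.
A_hat = A + np.eye(num_joints)
deg = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(deg, deg))
```

Second, a sketch of the grid-search-based multi-stream fusion the abstract describes: each input stream (e.g., joint, bone, motion) produces per-class scores, and stream weights are searched on a grid to maximize validation accuracy. The function name, grid resolution, and stream setup are assumptions:

```python
import numpy as np
from itertools import product

def grid_search_fusion(stream_scores, labels, grid=np.arange(0.0, 1.01, 0.1)):
    """Search per-stream weights on a validation set; return the best combo.

    stream_scores: list of (num_samples, num_classes) score arrays,
                   one array per input stream (hypothetical setup).
    labels:        (num_samples,) ground-truth class indices.
    """
    best_acc, best_w = -1.0, None
    for w in product(grid, repeat=len(stream_scores)):
        if sum(w) == 0:          # skip the all-zero weight combination
            continue
        fused = sum(wi * s for wi, s in zip(w, stream_scores))
        acc = (fused.argmax(axis=1) == labels).mean()
        if acc > best_acc:
            best_acc, best_w = acc, w
    return best_w, best_acc
```

The weights found on the validation set would then be reused to fuse test-set scores, in line with the usual multi-stream evaluation protocol on datasets such as NTU RGB+D.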

Details

Language :
English
ISSN :
20738994
Volume :
14
Issue :
8
Database :
Complementary Index
Journal :
Symmetry (20738994)
Publication Type :
Academic Journal
Accession number :
158943726
Full Text :
https://doi.org/10.3390/sym14081547