Multi-scale context-aware network for continuous sign language recognition

Authors :
Senhua XUE
Liqing GAO
Liang WAN
Wei FENG
Source :
Virtual Reality & Intelligent Hardware, Vol 6, Iss 4, Pp 323-337 (2024)
Publication Year :
2024
Publisher :
KeAi Communications Co., Ltd., 2024.

Abstract

The hands and face are the most important parts for expressing sign language morphemes in sign language videos. However, we find that existing Continuous Sign Language Recognition (CSLR) methods either neglect hand and face information in their visual backbones or rely on expensive, time-consuming external extractors to obtain it. In addition, signs vary in length, whereas previous CSLR methods typically segment the video with a fixed-length window to capture sequential features and then perform global temporal modeling, which disturbs the perception of complete signs. In this study, we propose a Multi-Scale Context-Aware network (MSCA-Net) to solve these problems. MSCA-Net contains two main modules: (1) Multi-Scale Motion Attention (MSMA), which uses the differences among frames to perceive hand and face information at multiple spatial scales, replacing heavy external feature extractors; and (2) Multi-Scale Temporal Modeling (MSTM), which explores crucial temporal information in the sign language video at different temporal scales. We conduct extensive experiments on three widely used sign language datasets, i.e., RWTH-PHOENIX-Weather-2014, RWTH-PHOENIX-Weather-2014T, and CSL-Daily. The proposed MSCA-Net achieves state-of-the-art performance, demonstrating the effectiveness of our approach.
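The abstract describes MSMA as computing frame differences and attending to them at multiple spatial scales. The sketch below is a hypothetical PyTorch reconstruction of that idea, not the authors' released implementation: the module name, scale choices, and tensor shapes are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleMotionAttention(nn.Module):
    """Illustrative sketch of frame-difference attention at several spatial
    scales, loosely following the MSMA description in the abstract.
    All design details here are assumptions, not the paper's architecture."""

    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # One 1x1 conv per scale turns pooled motion maps into attention logits.
        self.proj = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1) for _ in scales
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width) frame-level features
        b, t, c, h, w = x.shape
        # Difference between adjacent frames approximates motion; pad the
        # final step with zeros so the sequence length is preserved.
        diff = torch.zeros_like(x)
        diff[:, :-1] = x[:, 1:] - x[:, :-1]
        diff = diff.reshape(b * t, c, h, w)
        attn = torch.zeros_like(diff)
        for scale, proj in zip(self.scales, self.proj):
            # Pool the motion map to a coarser grid, project it, and upsample
            # back, so fine regions (hands, face) and coarse context both
            # contribute to the attention map.
            pooled = F.avg_pool2d(diff, kernel_size=scale) if scale > 1 else diff
            a = proj(pooled)
            if scale > 1:
                a = F.interpolate(a, size=(h, w), mode="bilinear",
                                  align_corners=False)
            attn = attn + a
        gate = torch.sigmoid(attn / len(self.scales))
        # Reweight the original features with the motion-derived attention.
        out = x.reshape(b * t, c, h, w) * gate
        return out.reshape(b, t, c, h, w)
```

As a usage check, applying `MultiScaleMotionAttention(channels=512)` to a tensor of shape `(2, 16, 512, 14, 14)` returns a tensor of the same shape, with stronger responses where adjacent frames differ, which is the behavior the abstract attributes to motion-based attention.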

Details

Language :
English
ISSN :
2096-5796
Volume :
6
Issue :
4
Database :
Directory of Open Access Journals
Journal :
Virtual Reality & Intelligent Hardware
Publication Type :
Academic Journal
Accession number :
edsdoj.527ce2617faa45119f035372cc1a679f
Document Type :
article
Full Text :
https://doi.org/10.1016/j.vrih.2023.06.011