An Efficient Sign Language Translation Using Spatial Configuration and Motion Dynamics with LLMs

Authors :
Hwang, Eui Jun
Cho, Sukmin
Lee, Junmyeong
Park, Jong C.
Publication Year :
2024

Abstract

Gloss-free Sign Language Translation (SLT) converts sign videos directly into spoken language sentences without relying on glosses. Recently, Large Language Models (LLMs) have shown remarkable translation performance in gloss-free methods by harnessing their powerful natural language generation capabilities. However, these methods often rely on domain-specific fine-tuning of visual encoders to achieve optimal results. By contrast, this paper emphasizes the importance of capturing the spatial configurations and motion dynamics inherent in sign language. With this in mind, we introduce Spatial and Motion-based Sign Language Translation (SpaMo), a novel LLM-based SLT framework. The core idea of SpaMo is simple yet effective. We first extract spatial and motion features using off-the-shelf visual encoders and then input these features into an LLM with a language prompt. Additionally, we employ a visual-text alignment process as a warm-up before the SLT supervision. Our experiments demonstrate that SpaMo achieves state-of-the-art performance on two popular datasets, PHOENIX14T and How2Sign.

Comment: Under Review
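The sketch below illustrates the pipeline the abstract describes: spatial and motion features are extracted from a sign video, projected into the LLM's embedding space, and prepended to a language prompt. It is a minimal illustration only; the class names (SpatialEncoder, MotionEncoder, SpaMoSketch), dimensions, and clip length are hypothetical stand-ins, not the paper's actual encoders, LLM, or alignment procedure.

```python
# Minimal sketch of a SpaMo-style pipeline (assumptions noted in comments).
# The real method uses off-the-shelf pretrained visual encoders and a
# pretrained LLM; these stubs only show how the features would be combined.
import torch
import torch.nn as nn


class SpatialEncoder(nn.Module):
    """Stand-in for a frozen per-frame image encoder (spatial configuration)."""
    def __init__(self, feat_dim: int = 768):
        super().__init__()
        self.proj = nn.Linear(3 * 224 * 224, feat_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, 3, 224, 224) -> (T, feat_dim)
        return self.proj(frames.flatten(1))


class MotionEncoder(nn.Module):
    """Stand-in for a video encoder over short clips (motion dynamics)."""
    def __init__(self, feat_dim: int = 768, clip_len: int = 16):
        super().__init__()
        self.clip_len = clip_len
        self.proj = nn.Linear(clip_len * 3 * 224 * 224, feat_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, 3, 224, 224) -> (T // clip_len, feat_dim)
        T = (frames.shape[0] // self.clip_len) * self.clip_len
        clips = frames[:T].reshape(-1, self.clip_len * 3 * 224 * 224)
        return self.proj(clips)


class SpaMoSketch(nn.Module):
    """Projects spatial and motion features into the LLM embedding space and
    prepends them to a language prompt (hypothetical alignment layer)."""
    def __init__(self, feat_dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        self.spatial = SpatialEncoder(feat_dim)
        self.motion = MotionEncoder(feat_dim)
        self.to_llm = nn.Linear(feat_dim, llm_dim)  # visual-to-text projection

    def forward(self, frames: torch.Tensor, prompt_embeds: torch.Tensor) -> torch.Tensor:
        # prompt_embeds: (P, llm_dim) token embeddings of the language prompt
        vis = torch.cat([self.spatial(frames), self.motion(frames)], dim=0)
        vis = self.to_llm(vis)
        # Sequence handed to the LLM: [visual features ; prompt tokens]
        return torch.cat([vis, prompt_embeds], dim=0)


if __name__ == "__main__":
    model = SpaMoSketch()
    video = torch.randn(32, 3, 224, 224)      # 32 frames of a sign video
    prompt = torch.randn(8, 4096)              # 8 prompt token embeddings
    seq = model(video, prompt)
    print(seq.shape)                            # (32 + 2 + 8, 4096)
```

In the paper, the visual-text alignment step serves as a warm-up that trains such a projection before full SLT supervision; the stub above simply shows where that projection sits in the data flow.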

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2408.10593
Document Type :
Working Paper