
A Dual Semantic-Aware Recurrent Global-Adaptive Network For Vision-and-Language Navigation

Authors:
Wang, Liuyi
He, Zongtao
Tang, Jiagui
Dang, Ronghao
Wang, Naijia
Liu, Chengju
Chen, Qijun
Source:
International Joint Conferences on Artificial Intelligence Organization 2023
Publication Year:
2023

Abstract

Vision-and-Language Navigation (VLN) is a realistic but challenging task that requires an agent to locate a target region using verbal and visual cues. While significant advances have been made recently, two broad limitations remain: (1) explicit mining of the significant guiding semantics concealed in both vision and language is still under-explored; (2) previous structured-map methods store the average historical appearance of visited nodes, ignoring the distinctive contributions of individual images and the retention of potent information during reasoning. This work proposes a dual semantic-aware recurrent global-adaptive network (DSRG) to address these problems. First, DSRG introduces an instruction-guidance linguistic module (IGL) and an appearance-semantics visual module (ASV) to boost vision and language semantic learning, respectively. For the memory mechanism, a global adaptive aggregation module (GAA) is devised for explicit panoramic observation fusion, and a recurrent memory fusion module (RMF) is introduced to supply implicit temporal hidden states. Extensive experiments on the R2R and REVERIE datasets demonstrate that our method outperforms existing methods. Code is available at https://github.com/CrystalSixone/DSRG.

Comment: Accepted by IJCAI 2023
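The abstract contrasts averaging the historical appearance of visited nodes with adaptive fusion of panoramic observations (the GAA module). A minimal sketch of that general idea follows; the function name, shapes, and query-based softmax scoring are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def adaptive_aggregate(view_feats: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Fuse panoramic view features into a single node feature with
    query-dependent attention weights, rather than a uniform average.

    view_feats: (num_views, dim) features of the views at one node.
    query:      (dim,) context vector (e.g. an instruction encoding).
    Returns:    (dim,) weighted fusion of the views.
    """
    # Scaled dot-product scores between each view and the query.
    scores = view_feats @ query / np.sqrt(view_feats.shape[-1])
    # Numerically stable softmax over views.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted sum: views aligned with the query contribute more.
    return weights @ view_feats
```

With identical views this reduces to the plain average; differing views are reweighted toward those most relevant to the query, which is the distinction the abstract draws against average-based map memories.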

Details

Database:
arXiv
Journal:
International Joint Conferences on Artificial Intelligence Organization 2023
Publication Type:
Report
Accession number:
edsarx.2305.03602
Document Type:
Working Paper
Full Text:
https://doi.org/10.24963/ijcai.2023/164