Spatio-Temporal Memory Attention for Image Captioning.
- Source :
- IEEE Transactions on Image Processing, 2020, Vol. 29, pp. 7615-7628 (14 pages)
- Publication Year :
- 2020
Abstract
- Visual attention has been successfully applied in image captioning to selectively incorporate the most relevant image areas into the language generation procedure. However, the attention in current image captioning methods is guided only indirectly and implicitly by the hidden state of the language model, e.g., an LSTM (Long Short-Term Memory), and thus the attended areas at different time steps are only weakly related. Besides the spatial relationship of attended areas, the temporal relationship in attention is crucial for image captioning, in line with the attention transmission mechanism of human vision. In this paper, we propose a new spatio-temporal memory attention (STMA) model to learn the spatio-temporal relationship in attention for image captioning. The STMA introduces a memory mechanism to the attention model through a tailored LSTM, where the new cell is used to memorize and propagate the attention information, and the output gate is used to generate the attention weights. The attention in STMA is transmitted with memory adaptively and dependently, which builds strong temporal connections between attentions and simultaneously learns the spatio-temporal relationship of the attended areas. Moreover, the proposed STMA can be flexibly combined with attention-based image captioning frameworks. Experiments on the MS COCO dataset demonstrate the superiority of the proposed STMA model in exploring the spatio-temporal relationship in attention and improving current attention-based image captioning.
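- The abstract describes the core mechanism only at a high level: a tailored LSTM whose memory cell carries attention information across time steps and whose output gate produces the attention weights. The following is a minimal, hypothetical sketch of such a cell, written in PyTorch under our own assumptions about inputs and shapes; all names (e.g. `STMACell`, `regions`, `h_lang`) are illustrative and do not reflect the authors' actual implementation.

```python
# Hypothetical sketch of a spatio-temporal memory attention (STMA) cell.
# Assumes PyTorch; shapes and gating inputs are our assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F


class STMACell(nn.Module):
    """LSTM-style cell: a memory cell propagates attention information
    over time, and the output gate yields attention weights over K regions."""

    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        # Input, forget, and output gates driven by region features and the
        # language-model hidden state (an assumed choice of gate inputs).
        self.gates = nn.Linear(feat_dim + hidden_dim, 3 * feat_dim)

    def forward(self, regions, h_lang, c_attn_prev):
        # regions:     (B, K, feat_dim)   region features
        # h_lang:      (B, hidden_dim)    hidden state of the captioning LSTM
        # c_attn_prev: (B, K, feat_dim)   attention memory from the previous step
        B, K, _ = regions.shape
        h_exp = h_lang.unsqueeze(1).expand(B, K, -1)  # broadcast over regions
        i, f, o = self.gates(torch.cat([regions, h_exp], dim=-1)).chunk(3, dim=-1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)

        # Memory cell memorizes and propagates attention information over time.
        c_attn = f * c_attn_prev + i * torch.tanh(regions)

        # Output gate generates per-region scores, softmaxed into attention weights.
        scores = (o * torch.tanh(c_attn)).sum(dim=-1)          # (B, K)
        alpha = F.softmax(scores, dim=-1)                      # attention weights
        context = (alpha.unsqueeze(-1) * regions).sum(dim=1)   # attended feature
        return context, alpha, c_attn
```

- In a captioning loop, the cell would be called once per generated word, feeding `c_attn` back in so that the attention at each step depends on where the model attended before; this is only one plausible reading of the mechanism summarized in the abstract.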
- Subjects :
- LANGUAGE models
ATTENTION
LANGUAGE policy
MEMORY
Details
- Language :
- English
- ISSN :
- 1057-7149
- Volume :
- 29
- Database :
- Complementary Index
- Journal :
- IEEE Transactions on Image Processing
- Publication Type :
- Academic Journal
- Accession number :
- 170078513
- Full Text :
- https://doi.org/10.1109/TIP.2020.3004729