1. Multimodal feature-wise co-attention method for visual question answering
- Author
- Jincai Chen, Min Chen, Fuhao Zou, Sheng Zhang, Yuan-Fang Li, and Ping Lu
- Subjects
- Subjects
- Artificial intelligence, Artificial neural networks, Computer science, Deep learning, Machine learning, Question answering, Networking & telecommunications, Hardware and Architecture, Signal Processing, Software, Information Systems
- Abstract
Visual question answering (VQA) has attracted considerable research attention in recent years and has potential applications such as remote consultation during COVID-19. Attention mechanisms provide an effective way of selectively utilizing visual and question information in VQA. The attention methods of existing VQA models generally focus on the spatial dimension; that is, attention is modeled as spatial probabilities that re-weight image region or word token features. However, feature-wise attention should not be ignored, as image and question representations are organized along both spatial and feature-wise dimensions. Taking the question “What is the color of the woman’s hair” as an example, identifying the hair-color attribute feature is as important as focusing on the hair region. In this paper, we propose a novel neural network module named “multimodal feature-wise attention module” (MulFA) to model feature-wise attention. Extensive experiments show that MulFA is capable of filtering representations for feature refinement and leads to improved performance. By introducing MulFA modules, we construct an effective union feature-wise and spatial co-attention network (UFSCAN) model for VQA. Our evaluation on two large-scale VQA datasets, VQA 1.0 and VQA 2.0, shows that UFSCAN achieves performance competitive with state-of-the-art models.
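The abstract does not include code, but the core idea of feature-wise co-attention can be illustrated with a minimal PyTorch sketch. The module name, layer sizes, and pooling choices below are assumptions for illustration, not the authors' released MulFA implementation: both modalities are summarized, a joint projection produces one sigmoid gate per modality over the feature dimension, and every region/token feature vector is re-weighted channel-wise.

```python
# Hypothetical sketch of multimodal feature-wise co-attention.
# Names, shapes, and pooling choices are assumptions, not the authors' code.
import torch
import torch.nn as nn


class FeatureWiseCoAttention(nn.Module):
    """Gate each feature dimension of one modality using a summary of both modalities."""

    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        # Joint projection of pooled image and question summaries.
        self.joint = nn.Sequential(
            nn.Linear(2 * dim, hidden),
            nn.ReLU(inplace=True),
        )
        # One sigmoid gate per modality, each of size `dim` (feature-wise weights).
        self.gate_v = nn.Linear(hidden, dim)
        self.gate_q = nn.Linear(hidden, dim)

    def forward(self, v: torch.Tensor, q: torch.Tensor):
        # v: image region features, shape (B, R, D)
        # q: question word features, shape (B, T, D)
        v_pool = v.mean(dim=1)                                    # (B, D) summary over regions
        q_pool = q.mean(dim=1)                                    # (B, D) summary over words
        joint = self.joint(torch.cat([v_pool, q_pool], dim=-1))   # (B, hidden)
        g_v = torch.sigmoid(self.gate_v(joint)).unsqueeze(1)      # (B, 1, D) feature-wise gate
        g_q = torch.sigmoid(self.gate_q(joint)).unsqueeze(1)      # (B, 1, D) feature-wise gate
        # Re-weight the feature dimension of every region / word token.
        return v * g_v, q * g_q


if __name__ == "__main__":
    v = torch.randn(2, 36, 1024)   # e.g. 36 detected regions with 1024-d features
    q = torch.randn(2, 14, 1024)   # e.g. 14 question tokens with 1024-d features
    v_att, q_att = FeatureWiseCoAttention(dim=1024)(v, q)
    print(v_att.shape, q_att.shape)  # torch.Size([2, 36, 1024]) torch.Size([2, 14, 1024])
```

Unlike spatial attention, which produces one weight per region or token, this gate produces one weight per feature channel, so an attribute-like dimension (e.g. color) can be emphasized across all positions; in the paper's UFSCAN model such feature-wise modules are combined with spatial co-attention.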
- Published
- 2021