Achieving Human Parity on Visual Question Answering.
- Source :
- ACM Transactions on Information Systems; Jul 2023, Vol. 41 Issue 3, p1-40, 40p
- Publication Year :
- 2023
Abstract
- The Visual Question Answering (VQA) task combines visual and language analysis to answer a textual question about an image. It has been a popular research topic with a growing number of real-world applications over the last decade. This paper introduces AliceMind-MMU (ALIbaba’s Collection of Encoder-decoders from Machine IntelligeNce lab of Damo academy - MultiMedia Understanding), a novel hierarchical integration of vision and language that achieves results comparable to, or even slightly better than, human performance on VQA. A hierarchical framework is designed to tackle the practical problems of VQA in a cascaded manner, including: (1) diverse visual semantics learning for comprehensive image content understanding; (2) enhanced multi-modal pre-training with modality adaptive attention; and (3) knowledge-guided model integration with three specialized expert modules for the complex VQA task. Treating different types of visual questions with the corresponding expertise plays an important role in boosting the performance of the VQA architecture to the human level. An extensive set of experiments and analyses demonstrates the effectiveness of this work. [ABSTRACT FROM AUTHOR]
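- Illustrative note: the "modality adaptive attention" named in point (2) of the abstract can be read as attention whose parameters are selected per token according to its modality (text token vs. image region). The sketch below is a minimal, assumed interpretation for illustration only, not the authors' implementation; the class name, shapes, and gating scheme are hypothetical.

```python
import torch
import torch.nn as nn

class ModalityAdaptiveAttention(nn.Module):
    """Toy modality-adaptive attention (hypothetical reading): text and image
    tokens in a joint sequence attend with separate attention parameters."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_image = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor, is_image: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim); is_image: (batch, seq) boolean, True for image-region tokens.
        text_out, _ = self.attn_text(x, x, x)    # branch with text-adapted parameters
        image_out, _ = self.attn_image(x, x, x)  # branch with image-adapted parameters
        gate = is_image.unsqueeze(-1).float()
        # Each token keeps the output of the branch matching its own modality.
        return gate * image_out + (1.0 - gate) * text_out

# Usage sketch: a joint sequence of 20 text tokens followed by 16 image-region tokens.
if __name__ == "__main__":
    x = torch.randn(2, 36, 256)
    is_image = torch.arange(36).unsqueeze(0).expand(2, -1) >= 20
    out = ModalityAdaptiveAttention(256)(x, is_image)
    print(out.shape)  # torch.Size([2, 36, 256])
```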
Details
- Language :
- English
- ISSN :
- 10468188
- Volume :
- 41
- Issue :
- 3
- Database :
- Complementary Index
- Journal :
- ACM Transactions on Information Systems
- Publication Type :
- Academic Journal
- Accession number :
- 163619591
- Full Text :
- https://doi.org/10.1145/3572833