
E2E-VLP: End-to-End Vision-Language Pre-training Enhanced by Visual Learning

Authors :
Ming Yan
Bin Bi
Wenming Xiao
Songfang Huang
Chenliang Li
Fei Huang
Haiyang Xu
Source :
ACL/IJCNLP (1)
Publication Year :
2021
Publisher :
Association for Computational Linguistics, 2021.

Abstract

Vision-language pre-training (VLP) on large-scale image-text pairs has achieved great success on cross-modal downstream tasks. Most existing pre-training methods adopt a two-step training procedure: a pre-trained object detector first extracts region-based visual features, and the image representations are then concatenated with text embeddings as input to a Transformer for training. However, these methods face two problems: the visual representations are tailored to the specific object-detection task rather than to generic cross-modal understanding, and the two-stage pipeline is computationally inefficient. In this paper, we propose the first end-to-end vision-language pre-trained model for both V+L understanding and generation, namely E2E-VLP, in which a unified Transformer framework jointly learns visual representations and semantic alignments between image and text. We incorporate the tasks of object detection and image captioning into pre-training with a unified Transformer encoder-decoder architecture to enhance visual learning. An extensive set of experiments has been conducted on well-established vision-language downstream tasks to demonstrate the effectiveness of this novel VLP paradigm.

ACL 2021 main conference
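To make the described architecture concrete, below is a minimal PyTorch sketch of an end-to-end encoder-decoder in the spirit of the abstract: raw pixels go through a jointly trained visual backbone (no frozen region detector), the grid features are fused with text tokens in a shared Transformer encoder, and a decoder drives both generation (captioning) and detection-style heads. All module names, dimensions, the ResNet-50 backbone, and the DETR-like detection heads are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of an end-to-end VLP encoder-decoder (not the official E2E-VLP code).
import torch
import torch.nn as nn
import torchvision


class E2EVLPSketch(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, nhead=8,
                 num_encoder_layers=6, num_decoder_layers=6, num_classes=91):
        super().__init__()
        # Visual backbone: raw pixels -> feature map, trained jointly with the
        # Transformer (no separate, frozen object detector). ResNet-50 is an assumption.
        backbone = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.visual_proj = nn.Conv2d(2048, d_model, kernel_size=1)

        # Text embeddings for the paired sentence.
        self.text_embed = nn.Embedding(vocab_size, d_model)

        # Unified Transformer: the encoder fuses image grid features and text
        # tokens in one sequence; the decoder serves generation and detection-style
        # pre-training objectives.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_encoder_layers,
            num_decoder_layers=num_decoder_layers,
            batch_first=True)

        # Heads: token prediction for image captioning, class/box prediction for
        # the object-detection objective (DETR-style, an assumption).
        self.lm_head = nn.Linear(d_model, vocab_size)
        self.class_head = nn.Linear(d_model, num_classes + 1)
        self.box_head = nn.Linear(d_model, 4)

    def forward(self, images, text_ids, decoder_ids):
        # images: (B, 3, H, W); text_ids, decoder_ids: (B, L) token ids.
        feat = self.visual_proj(self.backbone(images))   # (B, D, h, w)
        vis_tokens = feat.flatten(2).transpose(1, 2)     # (B, h*w, D)
        txt_tokens = self.text_embed(text_ids)           # (B, L, D)
        fused_input = torch.cat([vis_tokens, txt_tokens], dim=1)

        tgt = self.text_embed(decoder_ids)
        dec_out = self.transformer(fused_input, tgt)     # (B, L, D)

        return {
            "caption_logits": self.lm_head(dec_out),
            "class_logits": self.class_head(dec_out),
            "pred_boxes": self.box_head(dec_out).sigmoid(),
        }


if __name__ == "__main__":
    model = E2EVLPSketch()
    imgs = torch.randn(2, 3, 224, 224)
    txt = torch.randint(0, 30522, (2, 16))
    dec = torch.randint(0, 30522, (2, 16))
    out = model(imgs, txt, dec)
    print({k: tuple(v.shape) for k, v in out.items()})
```

Positional encodings, masking, and the actual pre-training losses (language modeling, bounding-box regression, image-text matching, etc.) are omitted for brevity; the sketch only illustrates how one encoder-decoder can serve both understanding and generation objectives end to end.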

Details

Database :
OpenAIRE
Journal :
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Accession number :
edsair.doi.dedup.....e307ce81a641887ae95d1807e277f66c
Full Text :
https://doi.org/10.18653/v1/2021.acl-long.42