E2E-VLP: End-to-End Vision-Language Pre-training Enhanced by Visual Learning
- Source :
- ACL/IJCNLP (1)
- Publication Year :
- 2021
- Publisher :
- Association for Computational Linguistics, 2021.
Abstract
- Vision-language pre-training (VLP) on large-scale image-text pairs has achieved huge success for cross-modal downstream tasks. Most existing pre-training methods adopt a two-step training procedure: they first employ a pre-trained object detector to extract region-based visual features, then concatenate the image representation and text embedding as the input to a Transformer for training. However, these methods suffer from using a task-specific visual representation produced by a particular object detector for generic cross-modal understanding, and from the computational inefficiency of the two-stage pipeline. In this paper, we propose the first end-to-end vision-language pre-trained model for both V+L understanding and generation, namely E2E-VLP, where we build a unified Transformer framework to jointly learn visual representation and semantic alignment between image and text. We incorporate the tasks of object detection and image captioning into pre-training with a unified Transformer encoder-decoder architecture to enhance visual learning. An extensive set of experiments has been conducted on well-established vision-language downstream tasks to demonstrate the effectiveness of this novel VLP paradigm. (ACL 2021 main conference)
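The abstract's single-stream design (CNN features flattened and encoded jointly with text, with detection- and captioning-style heads on a shared decoder) can be illustrated with a minimal forward-pass sketch. This is not the authors' released implementation: the class name E2EVLPSketch, the ResNet-50 backbone, the 256-dimensional hidden size, the number of object queries, the vocabulary size, and the omission of positional encodings are all illustrative assumptions, written in PyTorch.

```python
# Minimal sketch of an end-to-end vision-language forward pass, assuming a
# torchvision ResNet-50 backbone and a standard nn.Transformer. All names and
# sizes here are illustrative, not details taken from the E2E-VLP paper.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class E2EVLPSketch(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, num_queries=100):
        super().__init__()
        # CNN backbone produces a grid of visual features directly from pixels
        # (no external object detector), which is the end-to-end idea.
        backbone = resnet50(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.visual_proj = nn.Conv2d(2048, d_model, kernel_size=1)
        # Text embeddings share the same Transformer encoder as the image grid.
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=6, num_decoder_layers=6,
            batch_first=True,
        )
        # Decoder-side heads standing in for the detection and captioning
        # pre-training tasks (positional encodings omitted for brevity).
        self.object_queries = nn.Embedding(num_queries, d_model)
        self.box_head = nn.Linear(d_model, 4)          # (cx, cy, w, h)
        self.caption_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, text_ids):
        # Flatten the CNN feature map into a sequence of visual tokens.
        feat = self.visual_proj(self.backbone(images))       # B x d x H x W
        vis_tokens = feat.flatten(2).transpose(1, 2)          # B x HW x d
        txt_tokens = self.text_embed(text_ids)                # B x T x d
        # Jointly encode image and text tokens in one Transformer encoder.
        memory = self.transformer.encoder(
            torch.cat([vis_tokens, txt_tokens], dim=1))
        # Decode learned object queries against the joint memory.
        queries = self.object_queries.weight.unsqueeze(0).expand(
            images.size(0), -1, -1)
        dec_out = self.transformer.decoder(queries, memory)
        return self.box_head(dec_out).sigmoid(), self.caption_head(dec_out)


if __name__ == "__main__":
    model = E2EVLPSketch()
    boxes, caption_logits = model(torch.randn(2, 3, 224, 224),
                                  torch.randint(0, 30522, (2, 16)))
    print(boxes.shape, caption_logits.shape)  # (2, 100, 4) (2, 100, 30522)
```

In this sketch the object detector is replaced by a plain CNN feature grid encoded together with the text, so visual representation learning and cross-modal alignment happen in one pass rather than in a two-stage pipeline.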
- Subjects :
- FOS: Computer and information sciences
Computation and Language (cs.CL)
Artificial Intelligence (cs.AI)
Computer Vision and Pattern Recognition (cs.CV)
Computer science
Computer vision
Artificial intelligence
Object detection
Image captioning
Visual learning
Transformer (machine learning model)
Details
- Database :
- OpenAIRE
- Journal :
- Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
- Accession number :
- edsair.doi.dedup.....e307ce81a641887ae95d1807e277f66c
- Full Text :
- https://doi.org/10.18653/v1/2021.acl-long.42