
HaVTR: Improving Video-Text Retrieval Through Augmentation Using Large Foundation Models

Authors:
Wang, Yimu
Yuan, Shuai
Jian, Xiangru
Pang, Wei
Wang, Mushi
Yu, Ning

Publication Year: 2024

Abstract

While recent progress in video-text retrieval has been driven by the exploration of powerful model architectures and training strategies, the representation learning ability of video-text retrieval models is still limited by low-quality and scarce training data annotations. To address this issue, we present a novel video-text learning paradigm, HaVTR, which augments video and text data to learn more generalized features. Specifically, we first adopt a simple augmentation method that generates self-similar data by randomly duplicating or dropping subwords and frames. In addition, inspired by recent advances in visual and language generative models, we propose a more powerful augmentation method: textual paraphrasing and video stylization using large language models (LLMs) and visual generative models (VGMs). Further, to bring richer information into video and text, we propose a hallucination-based augmentation method, in which LLMs and VGMs generate and add new relevant information to the original data. Benefiting from the enriched data, HaVTR outperforms existing methods on several video-text retrieval benchmarks, as demonstrated by extensive experiments.
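The abstract's simple augmentation (generating self-similar data by randomly duplicating or dropping subwords and frames) can be illustrated with a minimal Python sketch. This is not the authors' implementation; the function name, probabilities, and example inputs below are assumptions for illustration only.

```python
import random

def augment_sequence(items, dup_prob=0.1, drop_prob=0.1, rng=None):
    """Randomly duplicate or drop elements of a sequence.

    Applied to subword tokens (text) or frame indices (video), this
    yields a "self-similar" variant of the original sample, in the
    spirit of the simple augmentation described in the abstract.
    The probabilities are illustrative assumptions, not values from
    the paper.
    """
    rng = rng or random.Random()
    out = []
    for item in items:
        r = rng.random()
        if r < drop_prob:
            continue              # drop this subword/frame
        out.append(item)
        if r > 1.0 - dup_prob:
            out.append(item)      # duplicate this subword/frame
    # Guard against emptying a very short sequence.
    return out or [rng.choice(items)]

# Hypothetical example: augment a caption's subwords and a clip's frame indices.
tokens = ["a", "dog", "catch", "##es", "a", "fris", "##bee"]
frames = list(range(12))
print(augment_sequence(tokens))
print(augment_sequence(frames))
```

The same routine serves both modalities because it operates on generic sequences; the LLM/VGM-based paraphrasing, stylization, and hallucination augmentations described next would replace or extend content rather than merely resampling it.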

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2404.05083
Document Type: Working Paper