
PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers

Authors:
Lin, Weizhe
Mei, Jingbiao
Chen, Jinghong
Byrne, Bill

Publication Year:
2024

Abstract

Large Multimodal Models (LMMs) excel in natural language and visual understanding but are challenged by exacting tasks such as Knowledge-based Visual Question Answering (KB-VQA) which involve the retrieval of relevant information from document collections to use in shaping answers to questions. We present an extensive training and evaluation framework, M2KR, for KB-VQA. M2KR contains a collection of vision and language tasks which we have incorporated into a single suite of benchmark tasks for training and evaluating general-purpose multi-modal retrievers. We use M2KR to develop PreFLMR, a pre-trained version of the recently developed Fine-grained Late-interaction Multi-modal Retriever (FLMR) approach to KB-VQA, and we report new state-of-the-art results across a range of tasks. We also present investigations into the scaling behaviors of PreFLMR intended to be useful in future developments in general-purpose multi-modal retrievers.

Comment: ACL 2024; Project page: https://preflmr.github.io/
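As an illustrative aside: the abstract refers to "late-interaction" retrieval without spelling out the mechanism. The sketch below shows the generic ColBERT-style MaxSim scoring on which FLMR-family retrievers build, where every query token (which, in the multi-modal setting, may include visual tokens) is matched against its most similar document token and the per-token maxima are summed. The function name, tensor shapes, and toy data here are assumptions chosen for illustration; this is not the paper's actual implementation.

```python
import torch

def late_interaction_score(query_embs: torch.Tensor,
                           doc_embs: torch.Tensor) -> torch.Tensor:
    """ColBERT-style late-interaction (MaxSim) relevance score.

    query_embs: (num_query_tokens, dim) L2-normalized token embeddings;
                in a multi-modal query these may mix text and visual tokens.
    doc_embs:   (num_doc_tokens, dim) L2-normalized document token embeddings.
    """
    # Pairwise cosine similarities between every query token and doc token.
    sim = query_embs @ doc_embs.T           # (num_query_tokens, num_doc_tokens)
    # Each query token keeps only its best-matching document token ...
    max_sim = sim.max(dim=1).values         # (num_query_tokens,)
    # ... and the document's relevance is the sum over query tokens.
    return max_sim.sum()

# Toy usage: rank two random "documents" against one query.
torch.manual_seed(0)
q = torch.nn.functional.normalize(torch.randn(8, 128), dim=-1)
d1 = torch.nn.functional.normalize(torch.randn(50, 128), dim=-1)
d2 = torch.nn.functional.normalize(torch.randn(50, 128), dim=-1)
scores = [late_interaction_score(q, d).item() for d in (d1, d2)]
print(scores)  # higher score = more relevant under MaxSim
```

Because query and document tokens interact only through this final similarity step, document embeddings can be precomputed and indexed offline, which is what makes late-interaction retrievers practical at the scale the paper studies.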

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2402.08327
Document Type:
Working Paper