
Evaluating the Capabilities of Multi-modal Reasoning Models with Synthetic Task Data

Authors:
Vaska, Nathan
Helus, Victoria
Publication Year:
2023

Abstract

The impressive advances and applications of large language and joint language-and-visual understanding models have led to an increased need for methods of probing their potential reasoning capabilities. However, the difficulty of gathering naturally-occurring data for complex multi-modal reasoning tasks bottlenecks the evaluation of AI methods on tasks that are not already covered by an academic dataset. In this work, we leverage recent advances in high-resolution text-to-image generation to develop a framework for generating evaluation data for multi-modal reasoning tasks. We apply this framework to generate context-dependent anomaly data, creating a synthetic dataset for a challenging task that is not well covered by existing datasets. We benchmark the performance of a state-of-the-art visual question answering (VQA) model on data generated with this method, and demonstrate that while the task is tractable, the model performs significantly worse on the context-dependent anomaly detection task than on standard VQA tasks.
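
As an illustration of the kind of pipeline the abstract describes, the sketch below generates a synthetic scene from a text prompt and then poses an anomaly question to a VQA model. The specific models (Stable Diffusion via the diffusers library, BLIP via transformers), the prompt, and the question wording are assumptions for illustration; the paper does not name its models or prompts in this record.

    import torch
    from diffusers import StableDiffusionPipeline
    from transformers import BlipProcessor, BlipForQuestionAnswering

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Assumption: an off-the-shelf text-to-image model stands in for
    # the paper's synthetic data generator.
    generator = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5"
    ).to(device)

    # Hypothetical prompt pairing an ordinary object with an unusual
    # context, i.e., a context-dependent anomaly.
    prompt = "a photo of a fire hydrant in the middle of a kitchen"
    image = generator(prompt).images[0]

    # Assumption: BLIP stands in for the benchmarked VQA model.
    processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
    vqa_model = BlipForQuestionAnswering.from_pretrained(
        "Salesforce/blip-vqa-base"
    ).to(device)

    # Ask whether anything in the generated scene is out of place.
    question = "Is there an object that does not belong in this scene?"
    inputs = processor(image, question, return_tensors="pt").to(device)
    answer_ids = vqa_model.generate(**inputs)
    print(processor.decode(answer_ids[0], skip_special_tokens=True))

In a full evaluation, the generation step would be repeated over many object-context pairings and the model's answers scored against the known (generated) ground truth.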

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2306.01144
Document Type:
Working Paper