
Automatic benchmarking of large multimodal models via iterative experiment programming

Authors:
Conti, Alessandro
Fini, Enrico
Rota, Paolo
Wang, Yiming
Mancini, Massimiliano
Ricci, Elisa
Publication Year:
2024

Abstract

Assessing the capabilities of large multimodal models (LMMs) often requires the creation of ad-hoc evaluations. Currently, building new benchmarks requires tremendous amounts of manual work for each specific analysis. This makes the evaluation process tedious and costly. In this paper, we present APEx, Automatic Programming of Experiments, the first framework for automatic benchmarking of LMMs. Given a research question expressed in natural language, APEx leverages a large language model (LLM) and a library of pre-specified tools to generate a set of experiments for the model at hand, and progressively compile a scientific report. The report drives the testing procedure: based on the current status of the investigation, APEx chooses which experiments to perform and whether the results are sufficient to draw conclusions. Finally, the LLM refines the report, presenting the results to the user in natural language. Thanks to its modularity, our framework is flexible and extensible as new tools become available. Empirically, APEx reproduces the findings of existing studies while allowing for arbitrary analyses and hypothesis testing.

Comment: 31 pages, 6 figures, code is available at https://github.com/altndrr/apex
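The abstract describes an iterative loop in which an LLM repeatedly selects an experiment from a library of pre-specified tools, executes it, folds the result into a running report, and stops once the evidence is judged sufficient to draw conclusions. The Python sketch below illustrates only that control flow under loose assumptions; the names used here (Report, propose_experiment, run_tool, investigate) are hypothetical and do not reflect the actual APEx implementation, which is available at https://github.com/altndrr/apex.

# Hypothetical sketch of the iterative experiment-programming loop described
# in the abstract. All names are illustrative, not the real APEx API.
from dataclasses import dataclass, field


@dataclass
class Report:
    """Running scientific report that drives the investigation."""
    question: str
    findings: list[str] = field(default_factory=list)

    def summary(self) -> str:
        joined = "; ".join(self.findings) or "none yet"
        return f"Question: {self.question}\nFindings: {joined}"


def propose_experiment(report: Report, tools: dict) -> str | None:
    """Stand-in for the LLM call that picks the next tool to run,
    or returns None when the evidence is judged sufficient."""
    tried = " ".join(report.findings)
    untried = [name for name in tools if name not in tried]
    return untried[0] if untried else None


def run_tool(name: str, tools: dict) -> str:
    """Execute one pre-specified tool and return its result as text."""
    return tools[name]()


def investigate(question: str, tools: dict, max_steps: int = 5) -> Report:
    """Iteratively select experiments, run them, and refine the report."""
    report = Report(question=question)
    for _ in range(max_steps):
        choice = propose_experiment(report, tools)
        if choice is None:  # results deemed sufficient to draw conclusions
            break
        result = run_tool(choice, tools)
        report.findings.append(f"{choice}: {result}")
    return report


if __name__ == "__main__":
    # Toy tool library standing in for real evaluation routines.
    toy_tools = {
        "zero_shot_accuracy": lambda: "0.71 on a toy split",
        "robustness_probe": lambda: "accuracy drops under added noise",
    }
    final = investigate("Is model X robust to image noise?", toy_tools)
    print(final.summary())

In the framework as described, propose_experiment would be an LLM call conditioned on the current report rather than a simple heuristic, and the tools would wrap genuine evaluation routines for the model under test.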

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.12321
Document Type:
Working Paper