
NTSEBench: Cognitive Reasoning Benchmark for Vision Language Models

Authors :
Pandya, Pranshu
Talwarr, Agney S
Gupta, Vatsal
Kataria, Tushar
Gupta, Vivek
Roth, Dan
Publication Year :
2024

Abstract

Cognitive textual and visual reasoning tasks, such as puzzles, series, and analogies, demand the ability to quickly reason, decipher, and evaluate patterns both textually and spatially. While LLMs and VLMs, through extensive training on large amounts of human-curated data, have attained a high level of pseudo-human intelligence in some common sense reasoning tasks, they still struggle with more complex reasoning tasks that require cognitive understanding. In this work, we introduce a new dataset, NTSEBench, designed to evaluate the cognitive multi-modal reasoning and problem-solving skills of large models. The dataset comprises 2,728 multiple-choice questions with a total of 4,642 images across 26 categories, sampled from the NTSE examination conducted nationwide in India, featuring both visual and textual general aptitude questions that do not rely on rote learning. We establish baselines on the dataset using state-of-the-art LLMs and VLMs. To facilitate a comparison between open-source and proprietary models, we propose four distinct modeling strategies to handle the different modalities (text and images) in the dataset instances.
Comment: 15 pages, 2 figures, 5 tables
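As a rough illustration of what handling mixed text and image modalities in such multiple-choice instances might involve, the sketch below represents a single question and flattens it into an interleaved text-and-image prompt for a VLM. The field names, the "interleaved" strategy, and the chunk format are illustrative assumptions, not the paper's actual schema or one of its four modeling strategies.

```python
# Hypothetical sketch: one NTSEBench-style multiple-choice instance and an
# interleaved text+image prompt built from it. Field names and the chunk
# format are assumptions for illustration, not the paper's schema.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class MCQInstance:
    question_text: str                                          # textual part of the question
    question_images: List[str] = field(default_factory=list)    # paths to question figures
    options_text: List[str] = field(default_factory=list)       # textual options, if any
    options_images: List[str] = field(default_factory=list)     # image options, if any
    category: str = ""                                          # one of the 26 aptitude categories
    answer: Optional[str] = None                                 # gold option label, e.g. "B"


def build_interleaved_prompt(inst: MCQInstance) -> List[dict]:
    """Flatten an instance into text/image chunks a multimodal chat API could consume."""
    chunks: List[dict] = [{"type": "text", "text": inst.question_text}]
    for path in inst.question_images:
        chunks.append({"type": "image", "path": path})
    for label, opt in zip("ABCD", inst.options_text):
        chunks.append({"type": "text", "text": f"({label}) {opt}"})
    for label, path in zip("ABCD", inst.options_images):
        chunks.append({"type": "text", "text": f"Option ({label}):"})
        chunks.append({"type": "image", "path": path})
    chunks.append({"type": "text", "text": "Answer with the option letter only."})
    return chunks


if __name__ == "__main__":
    demo = MCQInstance(
        question_text="Which figure completes the series?",
        question_images=["series_1.png", "series_2.png", "series_3.png"],
        options_images=["opt_a.png", "opt_b.png", "opt_c.png", "opt_d.png"],
        category="figure series",
    )
    for chunk in build_interleaved_prompt(demo):
        print(chunk)
```

Depending on the model, such chunks would then be mapped to its native multimodal message format; a text-only LLM baseline would instead need the images dropped or replaced with textual descriptions, which is the kind of modality-handling choice the proposed strategies address.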

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2407.10380
Document Type :
Working Paper