
Multimodal Table Understanding

Authors :
Zheng, Mingyu
Feng, Xinwei
Si, Qingyi
She, Qiaoqiao
Lin, Zheng
Jiang, Wenbin
Wang, Weiping
Publication Year :
2024

Abstract

Although great progress has been made by previous table understanding methods, including recent approaches based on large language models (LLMs), they rely heavily on the premise that given tables must be converted into a certain text sequence (such as Markdown or HTML) to serve as model input. However, it is difficult to access such high-quality textual table representations in some real-world scenarios, and table images are much more accessible. Therefore, how to directly understand tables using intuitive visual information is a crucial and urgent challenge for developing more practical applications. In this paper, we propose a new problem, multimodal table understanding, where the model needs to generate correct responses to various table-related requests based on the given table image. To facilitate both model training and evaluation, we construct a large-scale dataset named MMTab, which covers a wide spectrum of table images, instructions and tasks. On this basis, we develop Table-LLaVA, a generalist tabular multimodal large language model (MLLM), which significantly outperforms recent open-source MLLM baselines on 23 benchmarks under held-in and held-out settings. The code and data are available at https://github.com/SpursGoZmy/Table-LLaVA

Comment: 23 pages, 16 figures, ACL 2024 main conference, camera-ready version
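To make the contrast drawn in the abstract concrete, the following is a minimal Python sketch of the two input settings: text-based table understanding, where a table must first be serialized into Markdown or HTML before an LLM can consume it, versus the multimodal setting proposed here, where the model answers directly from a table image. The function `answer_from_table_image` is a hypothetical placeholder, not an API from the paper's repository.

```python
import pandas as pd

# A toy table, as it might appear in a document or on a web page.
df = pd.DataFrame({"Team": ["A", "B"], "Wins": [10, 7], "Losses": [2, 5]})

# Text-based setting: the table is serialized into a textual format
# (Markdown or HTML) and the resulting string is fed to an LLM.
markdown_table = df.to_markdown(index=False)  # requires the 'tabulate' package
html_table = df.to_html(index=False)
print(markdown_table)

# Multimodal setting (the problem studied in the paper): the model receives
# a rendered table *image* plus a natural-language instruction and must
# respond from visual input alone. This stub stands in for an MLLM
# inference call (e.g., Table-LLaVA); it is illustrative only.
def answer_from_table_image(image_path: str, instruction: str) -> str:
    raise NotImplementedError("Replace with an MLLM inference call.")
```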

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.08100
Document Type :
Working Paper