Do Text-to-Vis Benchmarks Test Real Use of Visualisations?
- Authors
Nguyen, Hy; He, Xuefei; Reeson, Andrew; Paris, Cecile; Poon, Josiah; Kummerfeld, Jonathan K.
- Subjects
Computer Science - Computation and Language; Computer Science - Human-Computer Interaction
- Abstract
Large language models can generate code for visualisations in response to simple user requests. This is a useful application, and an appealing one for NLP research because plots of data provide grounding for language. However, there are relatively few benchmarks, and those that exist may not be representative of what users do in practice. This paper investigates whether benchmarks reflect real-world use through an empirical study comparing benchmark datasets with code from public repositories. Our findings reveal a substantial gap, with evaluations not testing the same distribution of chart types, attributes, and actions as real-world examples. One dataset is representative, but requires extensive modification to become a practical end-to-end benchmark. This shows that new benchmarks are needed to support the development of systems that truly address users' visualisation needs. These observations will guide future data creation, highlighting which features hold genuine significance for users.
- Comment
Accepted to EMNLP 2024
- Published
2024