
@Bench: Benchmarking Vision-Language Models for Human-centered Assistive Technology

Authors:
Jiang, Xin
Zheng, Junwei
Liu, Ruiping
Li, Jiahang
Zhang, Jiaming
Matthiesen, Sven
Stiefelhagen, Rainer
Publication Year:
2024

Abstract

As Vision-Language Models (VLMs) advance, human-centered Assistive Technologies (ATs) for helping People with Visual Impairments (PVIs) are evolving into generalists capable of performing multiple tasks simultaneously. However, benchmarking VLMs for ATs remains under-explored. To bridge this gap, we first create a novel AT benchmark (@Bench). Guided by a pre-design user study with PVIs, our benchmark includes the five most crucial vision-language tasks: Panoptic Segmentation, Depth Estimation, Optical Character Recognition (OCR), Image Captioning, and Visual Question Answering (VQA). In addition, we propose a novel AT model (@Model) that addresses all tasks simultaneously and can be expanded to further assistive functions for helping PVIs. By integrating multi-modal information, our framework exhibits outstanding performance across tasks and offers PVIs more comprehensive assistance. Extensive experiments demonstrate the effectiveness and generalizability of our framework.

Comment: Accepted by WACV 2025, project page: https://junweizheng93.github.io/publications/ATBench/ATBench.html

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.14215
Document Type:
Working Paper