
EyeFormer: Predicting Personalized Scanpaths with Transformer-Guided Reinforcement Learning

Authors:
Jiang, Yue
Guo, Zixin
Tavakoli, Hamed Rezazadegan
Leiva, Luis A.
Oulasvirta, Antti
Publication Year:
2024

Abstract

From a visual perception perspective, modern graphical user interfaces (GUIs) comprise a complex, graphics-rich, two-dimensional visuospatial arrangement of text, images, and interactive objects such as buttons and menus. While existing models can accurately predict regions and objects that are likely to attract attention "on average", there is so far no scanpath model capable of predicting scanpaths for an individual. To close this gap, we introduce EyeFormer, which leverages a Transformer architecture as a policy network to guide a deep reinforcement learning algorithm that controls gaze locations. Our model has the unique capability of producing personalized predictions when given a few user scanpath samples. It can predict full scanpath information, including fixation positions and durations, across individuals and various stimulus types. Additionally, we demonstrate applications in GUI layout optimization driven by our model. Our software and models will be publicly available.
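To make the abstract's architectural claim concrete, the sketch below illustrates the general shape of an autoregressive scanpath policy: given stimulus features and the previous fixation, a policy network emits the next (x, y, duration) triple, and a rollout loop unrolls this into a full scanpath. This is an illustrative assumption only: the toy linear "policy" stands in for the paper's Transformer, and no reinforcement learning update is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyScanpathPolicy:
    """Stand-in for a trained Transformer policy (assumption, not the
    paper's actual model): maps stimulus features plus the previous
    fixation to the next fixation's (x, y, duration)."""

    def __init__(self, feat_dim=8):
        # Random weights stand in for learned parameters.
        self.W = rng.normal(scale=0.1, size=(feat_dim + 3, 3))

    def step(self, image_feat, prev_fixation):
        # Concatenate stimulus features with the previous (x, y, dur).
        h = np.concatenate([image_feat, prev_fixation]) @ self.W
        x, y = 1.0 / (1.0 + np.exp(-h[:2]))  # sigmoid: coords in (0, 1)
        dur = np.log1p(np.exp(h[2]))         # softplus: positive duration
        return np.array([x, y, dur])

def rollout(policy, image_feat, n_fixations=5):
    """Autoregressively generate a scanpath of (x, y, duration) triples."""
    fix = np.array([0.5, 0.5, 0.2])  # start near the screen centre
    path = []
    for _ in range(n_fixations):
        fix = policy.step(image_feat, fix)
        path.append(fix)
    return np.stack(path)

scanpath = rollout(ToyScanpathPolicy(), rng.normal(size=8))
print(scanpath.shape)  # (5, 3): five fixations, each (x, y, duration)
```

In the actual system described in the abstract, the policy's fixation choices would additionally be trained with a reinforcement learning objective, and a few observed scanpaths from a user would condition the policy to personalize its predictions.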

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.10163
Document Type:
Working Paper