1. Enhancing Eye-Tracking Performance through Multi-Task Learning Transformer
- Authors
Li, Weigeng; Zhou, Neng; Qu, Xiaodong
- Subjects
Computer Science - Human-Computer Interaction
- Abstract
In this study, we introduce an EEG signal reconstruction sub-module designed to enhance the performance of deep learning models on EEG eye-tracking tasks. The sub-module can be attached to any Encoder-Classifier-based deep learning model and is trained end-to-end within a multi-task learning framework. Because it is trained in an unsupervised manner, it is also applicable across a variety of tasks. We demonstrate its effectiveness by incorporating it into advanced deep learning models, including Transformers and pre-trained Transformers, and observe a clear improvement in feature representation, reflected in a Root Mean Squared Error (RMSE) of 54.1 mm, a notable gain over existing methods. These results suggest that the reconstruction sub-module strengthens the feature extraction ability of the encoder. Because the sub-module is mounted as an auxiliary task alongside the main task within the multi-task learning framework, the original model's end-to-end training process is preserved. Compared with pre-training approaches such as autoencoders, our method avoids the computational cost of a separate pre-training stage and adapts more flexibly to different model structures. We believe this reconstruction sub-module represents a novel paradigm for improving the performance of deep learning models on EEG-related challenges.
- Published
2024
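The abstract above describes a shared encoder with a main gaze-prediction head and an unsupervised EEG-reconstruction head trained jointly end-to-end. The sketch below is a minimal illustration of that multi-task setup, not the authors' implementation; all names, layer sizes, input shapes, and the loss weight `alpha` are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): a Transformer encoder shared by a
# gaze-regression head (main task) and an EEG-reconstruction head (auxiliary,
# unsupervised task), trained end-to-end with a weighted joint loss.
import torch
import torch.nn as nn

class MultiTaskEEGModel(nn.Module):
    def __init__(self, n_channels=128, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Project raw EEG (batch, seq_len, n_channels) into the model dimension.
        self.input_proj = nn.Linear(n_channels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Main-task head: predict a 2-D gaze position from the pooled representation.
        self.gaze_head = nn.Linear(d_model, 2)
        # Auxiliary head: reconstruct the input EEG from the encoded sequence.
        self.recon_head = nn.Linear(d_model, n_channels)

    def forward(self, x):
        h = self.encoder(self.input_proj(x))   # (batch, seq_len, d_model)
        gaze = self.gaze_head(h.mean(dim=1))   # (batch, 2)
        recon = self.recon_head(h)             # (batch, seq_len, n_channels)
        return gaze, recon

def joint_loss(gaze_pred, gaze_true, recon_pred, eeg_input, alpha=0.1):
    # Main-task regression loss plus a weighted unsupervised reconstruction loss.
    main = nn.functional.mse_loss(gaze_pred, gaze_true)
    aux = nn.functional.mse_loss(recon_pred, eeg_input)
    return main + alpha * aux

# Usage: one end-to-end training step on a dummy batch.
model = MultiTaskEEGModel()
eeg = torch.randn(8, 500, 128)   # batch of EEG segments (hypothetical shape)
gaze = torch.randn(8, 2)         # target gaze coordinates (e.g., in mm)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
gaze_pred, recon_pred = model(eeg)
loss = joint_loss(gaze_pred, gaze, recon_pred, eeg)
loss.backward()
optimizer.step()
```

Because the reconstruction target is the input signal itself, the auxiliary head needs no labels, which is what makes the sub-module reusable across tasks while keeping a single end-to-end training loop.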