Comparison of LSTM, Transformers, and MLP-mixer neural networks for gaze based human intention prediction.

Authors :
Pettersson, Julius
Falkman, Petter
Source :
Frontiers in Neurorobotics; 2023, p1-18, 18p
Publication Year :
2023

Abstract

Collaborative robots have gained popularity in industry, providing flexibility and increased productivity for complex tasks. However, their ability to interact with humans and adapt to their behavior is still limited. Prediction of human movement intentions is one way to improve the robots' adaptation. This paper investigates the performance of Transformer- and MLP-Mixer-based neural networks in predicting the intended human arm movement direction, based on gaze data obtained in a virtual reality environment, and compares the results to an LSTM network. The comparison evaluates the networks on accuracy across several metrics, time ahead of movement completion, and execution time. The paper shows that several network configurations and architectures achieve comparable accuracy scores. The best-performing Transformer encoder presented in this paper achieved an accuracy of 82.74% on continuous data for predictions with high certainty, and correctly classifies 80.06% of the movements at least once. In 99% of cases, a movement is correctly predicted the first time before the hand reaches the target, and in 75% of cases more than 19% ahead of movement completion. The results show that there are multiple ways to utilize neural networks for gaze-based arm movement intention prediction, a promising step toward enabling efficient human-robot collaboration. [ABSTRACT FROM AUTHOR]
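The abstract reports accuracy "for predictions with high certainty", i.e. the classifier's output is only accepted when the network is sufficiently confident. A minimal sketch of such certainty gating is shown below; the threshold value of 0.9 and the function names are illustrative assumptions, not taken from the paper.

```python
import math

def softmax(logits):
    """Convert raw network outputs into a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_certainty(logits, threshold=0.9):
    """Return the predicted movement-direction class index, or None
    when the top probability falls below the certainty threshold.
    The 0.9 threshold is a hypothetical choice for illustration."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best if probs[best] >= threshold else None
```

Applied frame by frame to continuous gaze data, a gate like this lets the system stay silent on ambiguous frames and commit only once one direction clearly dominates.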

Details

Language :
English
ISSN :
1662-5218
Database :
Complementary Index
Journal :
Frontiers in Neurorobotics
Publication Type :
Academic Journal
Accession number :
164210124
Full Text :
https://doi.org/10.3389/fnbot.2023.1157957