101. Real-Time Estimation of Eye Movement Condition Using a Deep Learning Model
- Author
- Kunihiko Tanaka, Yoshiki Itazu, Akihiro Sugiura, and Hiroki Takada
- Subjects
- Linear function (calculus), Computer science, business.industry, Deep learning, Peripheral vision, Eye movement, Computer vision, Sigmoid function, Artificial intelligence, Layer (object-oriented design), business, Convolutional neural network, Convolution
- Abstract
In this study, we conducted a basic investigation into discriminating the eye movement condition (peripheral versus central vision) using deep learning techniques. The subjects were 6 males aged 21–23 years. They watched two three-minute videos, one for central vision and one for peripheral vision, in random order for a total of eight sessions (four sessions each). The subjects wore an eye movement measurement device, and their eye movements (viewing angles) were recorded continuously while they viewed each video. From each 3-min eye movement time series, 350 short segments were extracted for each of four window lengths (0.5 s, 1 s, 2 s, and 3 s) with a shift of 0.5 s, and these data were used for training and evaluating the deep learning model. In the model, the number of nodes in the input layer matched the data length. The middle layers consisted of seven to eight blocks, each combining a one-dimensional convolution layer, a batch-normalization layer, a rectified linear unit (ReLU) activation, and a max-pooling layer. The output layer consisted of a fully connected layer, a sigmoid function, and multi-class cross-entropy. As a result, the discrimination accuracy improved as the data length increased, and the condition could be determined with an accuracy of over 90% when the eye movement data were at least one second long.
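The sliding-window segmentation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the sampling rate `fs` is an assumption (the abstract does not state one), and the helper name `sliding_windows` is hypothetical. With a 0.5 s shift over a 3-min recording, the window counts come out close to the 350 segments per condition reported in the abstract.

```python
import numpy as np

def sliding_windows(signal, fs, win_s, shift_s=0.5):
    """Split a 1-D gaze-angle signal into overlapping short segments.

    fs      -- assumed sampling rate in Hz (not given in the abstract)
    win_s   -- window length in seconds (0.5, 1, 2, or 3 in the study)
    shift_s -- shift between consecutive windows (0.5 s in the study)
    """
    win = int(win_s * fs)
    step = int(shift_s * fs)
    n = (len(signal) - win) // step + 1
    # Stack the n overlapping windows into an (n, win) array.
    return np.stack([signal[i * step : i * step + win] for i in range(n)])

fs = 100                       # assumed sampling rate (Hz)
x = np.random.randn(180 * fs)  # one 3-minute (180 s) recording
for win_s in (0.5, 1.0, 2.0, 3.0):
    print(win_s, sliding_windows(x, fs, win_s).shape)
```

Each row of the returned array is one training example; per the abstract, the model's input-layer size then simply matches the window length in samples.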
- Published
- 2021