Extraction and classification of tempo stimuli from electroencephalography recordings using convolutional recurrent attention model
- Author
Gi Yong Lee, Min‐Soo Kim, and Hyoung‐Gook Kim
- Subjects
attention mechanism, convolutional recurrent neural network, electroencephalography, spatiotemporal features, tempo stimuli classification, Telecommunication, TK5101-6720, Electronics, TK7800-8360
- Abstract
Electroencephalography (EEG) recordings taken during the perception of musical tempo contain information from which the tempo of a music piece can be estimated. If this tempo-stimulus information can be extracted from EEG recordings and classified, it can be used effectively to construct a music‐based brain–computer interface. This study proposes a novel convolutional recurrent attention model (CRAM) to extract and classify features corresponding to tempo stimuli from the EEG recordings of listeners who concentrated on the tempo of music pieces. The proposed CRAM comprises six modules, namely, network inputs, a two‐dimensional convolutional bidirectional gated recurrent unit‐based sample encoder, sample‐level intuitive attention, a segment encoder, segment‐level intuitive attention, and a softmax layer, to effectively model spatiotemporal features and improve the classification accuracy of tempo stimuli. To evaluate the proposed method's performance, we conducted experiments on two benchmark datasets. The proposed method achieves promising results, outperforming recent methods.
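The "intuitive attention" modules are defined in the paper itself; purely as a rough illustration of the general idea, attention-based pooling over a sequence of encoder outputs can be sketched with generic softmax attention (the scoring vector `w` and shapes here are hypothetical stand-ins, not the authors' exact formulation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(features, w):
    # features: (T, D) sequence of encoder outputs (e.g. per EEG sample/segment)
    # w: (D,) learned scoring vector (hypothetical stand-in)
    scores = features @ w          # (T,) relevance score per time step
    alpha = softmax(scores)        # attention weights, sum to 1
    return alpha @ features       # (D,) weighted summary vector

rng = np.random.default_rng(0)
feats = rng.standard_normal((10, 8))  # 10 time steps, 8-dim features
w = rng.standard_normal(8)
pooled = attention_pool(feats, w)
print(pooled.shape)
```

In a CRAM-like design, such pooling would be applied twice, once over samples within a segment and once over segments, before the softmax classification layer.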
- Published
- 2021