
MLMRS-Net: Electroencephalography (EEG) motion artifacts removal using a multi-layer multi-resolution spatially pooled 1D signal reconstruction network.

Authors :
Mahmud, Sakib
Hossain, Md Shafayet
Chowdhury, Muhammad E. H.
Reaz, Mamun Bin Ibne
Source :
Neural Computing & Applications. Apr2023, Vol. 35 Issue 11, p8371-8388. 18p.
Publication Year :
2023

Abstract

Electroencephalogram (EEG) signals suffer substantially from motion artifacts when recorded in ambulatory settings using wearable sensors. Because the diagnosis of many neurological diseases relies heavily on clean EEG data, it is critical to eliminate motion artifacts from motion-corrupted EEG signals using reliable and robust algorithms. Although a few deep learning-based models have been proposed for removing ocular, muscle, and cardiac artifacts from EEG data, to the best of our knowledge no attempt has been made to remove motion artifacts from motion-corrupted EEG signals. In this paper, a novel 1D convolutional neural network (CNN) for signal reconstruction, called the multi-layer multi-resolution spatially pooled (MLMRS) network, is proposed for EEG motion artifact removal. The performance of the proposed model was compared with ten other 1D CNN models: FPN, LinkNet, UNet, UNet+, UNetPP, UNet3+, AttentionUNet, MultiResUNet, DenseInceptionUNet, and AttentionUNet++, in removing motion artifacts from motion-contaminated single-channel EEG signals. All eleven deep CNN models were trained and tested using a single-channel benchmark EEG dataset from PhysioNet containing 23 sets of motion-corrupted and reference ground-truth EEG signals. The leave-one-out cross-validation method was used in this work. The performance of the deep learning models is measured using three well-known performance metrics, viz. the mean absolute error (MAE)-based reconstruction error, the difference in the signal-to-noise ratio (ΔSNR), and the percentage reduction in motion artifacts (η). The proposed MLMRS-Net model showed the best denoising performance, producing average ΔSNR, η, and MAE values of 26.64 dB, 90.52%, and 0.056, respectively, over all 23 sets of EEG recordings. The results reported using the proposed model outperformed all the existing state-of-the-art techniques in terms of average η improvement. [ABSTRACT FROM AUTHOR]
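The three evaluation metrics named in the abstract can be sketched in a few lines of NumPy. The exact definitions of ΔSNR and η below are common formulations from the EEG-denoising literature (SNR as signal power over residual-artifact power, η as percentage reduction in artifact power) and are assumptions here, not necessarily the precise definitions used in the paper; the function names are illustrative.

```python
import numpy as np

def mae(clean, denoised):
    """Mean absolute error between the reference and reconstructed EEG."""
    return np.mean(np.abs(clean - denoised))

def delta_snr(clean, corrupted, denoised):
    """SNR improvement in dB: SNR after denoising minus SNR before.

    SNR is taken as 10*log10(signal power / residual-artifact power),
    one common formulation (an assumption; the paper may differ).
    """
    snr_before = 10 * np.log10(np.sum(clean**2) / np.sum((corrupted - clean)**2))
    snr_after = 10 * np.log10(np.sum(clean**2) / np.sum((denoised - clean)**2))
    return snr_after - snr_before

def artifact_reduction(clean, corrupted, denoised):
    """Percentage reduction in artifact power (illustrative definition of eta)."""
    p_before = np.sum((corrupted - clean)**2)  # artifact power before denoising
    p_after = np.sum((denoised - clean)**2)    # residual artifact power after
    return 100.0 * (1.0 - p_after / p_before)
```

Under these definitions a perfect reconstruction gives MAE = 0 and η = 100%, and any denoiser that brings the output closer to the reference yields a positive ΔSNR.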

Details

Language :
English
ISSN :
0941-0643
Volume :
35
Issue :
11
Database :
Academic Search Index
Journal :
Neural Computing & Applications
Publication Type :
Academic Journal
Accession number :
162587436
Full Text :
https://doi.org/10.1007/s00521-022-08111-6