127 results for '"Wei-Long Zheng"'
Search Results
102. 767: EPILEPTIFORM ABNORMALITIES ARE ASSOCIATED WITH INCREASED MORTALITY IN ADULT ECMO PATIENTS
- Author
-
Marcos Firme, Gaston Cudemus, Kenneth Shelton, Yuval Raz, Wei-Long Zheng, Oluwaseun Johnson-Akeju, Edilberto Amorim, and M.B. Westover
- Subjects
Internal medicine, Cardiology, Medicine, Critical Care and Intensive Care Medicine
- Published
- 2020
103. Multimodal Vigilance Estimation with Adversarial Domain Adaptation Networks
- Author
-
Bao-Liang Lu, He Li, and Wei-Long Zheng
- Subjects
Domain adaptation, Computer science, Machine learning, Adversarial system, Artificial intelligence, Vigilance (psychology)
- Abstract
Robust vigilance estimation during driving is crucial for preventing traffic accidents. Many approaches have been proposed for vigilance estimation, but most require collecting subject-specific labeled data for calibration, which is costly in real-world applications. To solve this problem, domain adaptation methods can be used to align the feature distributions of existing subjects (the source domain) and a new subject (the target domain). By reusing data from other subjects, no labeled data from the new subject are required to train the models. In this paper, our goal is to apply adversarial domain adaptation networks to cross-subject vigilance estimation. We adopt two recently proposed adversarial domain adaptation networks and compare their performance with several traditional domain adaptation methods and a baseline without domain adaptation. A publicly available dataset, SEED-VIG, is used to evaluate the methods. The dataset includes electroencephalography (EEG) and electrooculography (EOG) signals, as well as the corresponding vigilance level annotations during simulated driving. Compared with the baseline, both adversarial domain adaptation networks achieve improvements of over 10% in terms of Pearson's correlation coefficient. In addition, both methods considerably outperform the traditional domain adaptation methods.
- Published
- 2018
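The vigilance-estimation papers above report performance as Pearson's correlation coefficient between predicted and annotated vigilance levels. A minimal sketch of that metric in plain Python (an illustration, not code from any of the papers):

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear relationship gives r = 1.0
print(pearson_r([0.1, 0.4, 0.7], [0.2, 0.8, 1.4]))
```

A correlation of 1.0 means the predicted vigilance tracks the annotation exactly up to a linear rescaling; the 10% improvements cited above are relative gains in this coefficient.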
104. Semi-supervised Deep Generative Modelling of Incomplete Multi-Modality Emotional Data
- Author
-
Changying Du, Bao-Liang Lu, Huiguang He, Wei-Long Zheng, Hao Wang, Jinpeng Li, and Changde Du
- Subjects
Signal processing, Machine learning, Computer vision and pattern recognition, Computer science, Generalization, Gaussian, Inference, Latent variable, Task (project management), Representation (mathematics), Modality (human–computer interaction), Pattern recognition, Artificial intelligence, Generative grammar, Multimedia
- Abstract
There are three challenges in emotion recognition. First, it is difficult to recognize a person's emotional state from a single modality alone. Second, manually annotating emotional data is expensive. Third, emotional data often suffer from missing modalities due to unforeseeable sensor malfunctions or configuration issues. In this paper, we address all of these problems under a novel multi-view deep generative framework. Specifically, we propose to model the statistical relationships of multi-modality emotional data using multiple modality-specific generative networks with a shared latent space. By imposing a Gaussian mixture assumption on the posterior approximation of the shared latent variables, our framework can learn the joint deep representation from multiple modalities and evaluate the importance of each modality simultaneously. To solve the labeled-data-scarcity problem, we extend our multi-view model to the semi-supervised learning scenario by casting the semi-supervised classification problem as a specialized missing-data imputation task. To address the missing-modality problem, we further extend our semi-supervised multi-view model to deal with incomplete data, where a missing view is treated as a latent variable and integrated out during inference. In this way, the proposed framework can utilize all available data (both labeled and unlabeled, both complete and incomplete) to improve its generalization ability. Experiments conducted on two real multi-modal emotion datasets demonstrate the superiority of our framework. (2018 ACM Multimedia Conference (MM'18).)
- Published
- 2018
- Full Text
- View/download PDF
105. Active Feedback Framework with Scan-Path Clustering for Deep Affective Models
- Author
-
Wei-Long Zheng, Xin-Wei Li, Bao-Liang Lu, and Li-Ming Zhao
- Subjects
Computer science, Speech recognition, Filter (signal processing), Electroencephalography, Affect (psychology), Task (project management), Outlier, Eye tracking, Cluster analysis
- Abstract
The attention that subjects pay during EEG-based emotion recognition experiments can seriously affect their emotion induction level and the annotation quality of the EEG data. It is therefore important to evaluate the raw EEG data before training the classification model. In this paper, we propose a framework that uses eye tracking data to filter out low-quality EEG data from participants with low attention, and thereby boosts the performance of deep affective models based on CNNs and LSTMs. We introduce a novel attention-deprived experiment with dual tasks, in which the dominant task is an auditory continuous performance test, identical pairs version (CPT-IP), and the subtask is an emotion-eliciting experiment. Motivated by the observation that attentive subjects share similar scan-path patterns for the same clips, we adopt cosine-distance-based spatial-temporal scan-path analysis of the eye tracking data to cluster these similar scan-paths. The average emotion recognition accuracy using the selected EEG data from attentive subjects is about 3% higher than that of the original, unfiltered training dataset. We also find that as the scan-path distance between an outlier and the cluster center increases, the performance of the corresponding EEG data tends to decrease.
- Published
- 2018
106. WGAN Domain Adaptation for EEG-Based Emotion Recognition
- Author
-
Bao-Liang Lu, Wei-Long Zheng, Si-Yang Zhang, and Yun Luo
- Subjects
Domain adaptation, Computer science, Feature vector, Stability (learning theory), Pattern recognition, Electroencephalography, Domain (software engineering), Emotion recognition, Artificial intelligence
- Abstract
In this paper, we propose a novel Wasserstein generative adversarial network domain adaptation (WGANDA) framework for building cross-subject electroencephalography (EEG)-based emotion recognition models. The proposed framework consists of GAN-like components and a two-step training procedure with pre-training and adversarial training. Pre-training maps the source domain and target domain to a common feature space, and adversarial training narrows the gap between the mappings of the two domains in that space. A Wasserstein GAN gradient penalty loss is applied during adversarial training to guarantee the stability and convergence of the framework. We evaluate the framework on two public EEG datasets for emotion recognition, SEED and DEAP. The experimental results demonstrate that our WGANDA framework successfully handles the domain shift problem in cross-subject EEG-based emotion recognition and significantly outperforms state-of-the-art domain adaptation methods.
- Published
- 2018
107. EEG-based emotion recognition using domain adaptation network
- Author
-
Wei-Long Zheng, Bao-Liang Lu, Yi-Ming Jin, and Yudong Luo
- Subjects
Artificial neural network, Computer science, Speech recognition, Feature extraction, Visualization, Support vector machine, Feature (machine learning), Layer (object-oriented design), Knowledge transfer, Subspace topology
- Abstract
This paper explores a fundamental problem in EEG-based emotion recognition: eliminating the differences between the source subject and the target subject. The major limitation of traditional classification methods is that the lack of domain adaptation and subspace alignment degrades the performance of cross-subject emotion recognition. To address this problem, we adopt a Domain Adaptation Network (DAN) for knowledge transfer, which maintains both feature discriminativeness and domain invariance during training. A feed-forward neural network is constructed by augmenting a few standard layers with a gradient reversal layer. Compared with five traditional methods, DAN outperforms its counterparts and achieves a mean accuracy of 79.19%. Moreover, we present a visualization of the features learned by DAN, which intuitively illustrates the transfer capability of the domain adaptation network.
- Published
- 2017
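The gradient reversal layer mentioned in the abstract above is conceptually simple: it acts as the identity in the forward pass and multiplies the incoming gradient by a negative constant in the backward pass, so the feature extractor is pushed toward domain-invariant features. A toy sketch of the behavior (hypothetical class, not the paper's code):

```python
class GradientReversal:
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""
    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between label loss and domain confusion
    def forward(self, x):
        return x  # features pass through unchanged
    def backward(self, grad):
        return -self.lam * grad  # reversed gradient reaches the feature extractor

grl = GradientReversal(lam=0.5)
print(grl.forward(3.0))   # → 3.0
print(grl.backward(2.0))  # → -1.0
```

In a full DAN this layer sits between the feature extractor and the domain classifier; deep learning frameworks implement it as a custom autograd function.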
108. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks
- Author
-
Bao-Liang Lu and Wei-Long Zheng
- Subjects
Speech recognition, Feature extraction, Pattern recognition, Electroencephalography, Support vector machine, Differential entropy, Deep belief network, Critical frequency, Artificial intelligence, Entropy (information theory), Emotion recognition, Psychology, Software
- Abstract
To investigate critical frequency bands and channels, this paper introduces deep belief networks (DBNs) for constructing EEG-based emotion recognition models for three emotions: positive, neutral, and negative. We develop an EEG dataset acquired from 15 subjects; each subject performs the experiment twice, at an interval of a few days. DBNs are trained with differential entropy features extracted from multichannel EEG data. We examine the weights of the trained DBNs and investigate the critical frequency bands and channels. Four profiles of 4, 6, 9, and 12 channels are selected. The recognition accuracies of these four profiles are relatively stable, with a best accuracy of 86.65%, which is even better than that of the original 62 channels. The critical frequency bands and channels determined from the weights of the trained DBNs are consistent with existing observations. In addition, our experimental results show that neural signatures associated with different emotions do exist and that they are shared across sessions and individuals. We compare the performance of deep models with shallow models: the average accuracies of DBN, SVM, LR, and KNN are 86.08%, 83.99%, 82.70%, and 72.60%, respectively.
- Published
- 2015
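Several of the entries above use differential entropy (DE) features. For a band-pass filtered EEG segment that is approximately Gaussian, DE reduces to a function of the segment's variance, DE = ½ ln(2πeσ²). A sketch under that Gaussian assumption:

```python
import math

def differential_entropy(segment):
    """DE of an approximately Gaussian signal: 0.5 * ln(2*pi*e*variance)."""
    n = len(segment)
    mean = sum(segment) / n
    var = sum((s - mean) ** 2 for s in segment) / n
    return 0.5 * math.log(2 * math.pi * math.e * var)

# A segment with unit variance gives DE = 0.5 * ln(2*pi*e) ≈ 1.419
print(differential_entropy([-1.0, 1.0, -1.0, 1.0]))
```

In practice the segment is first band-pass filtered into each frequency band (Delta, Theta, Alpha, Beta, Gamma), and one DE value per band and channel forms the feature vector.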
109. Neural patterns between Chinese and Germans for EEG-based emotion recognition
- Author
-
Bao-Liang Lu, Hiroshi Yokoi, Moritz Schaefer, Wei-Long Zheng, and Si-Yuan Wu
- Subjects
Speech recognition, Feature extraction, Electroencephalography, German, Support vector machine, Critical band, Mood, EEG data, Emotion recognition, Psychology
- Abstract
This paper explores the neural patterns of Chinese and German subjects in electroencephalogram (EEG)-based emotion recognition. Both Chinese and German subjects, wearing electrode caps, watched video stimuli that triggered positive, neutral, and negative emotions. Two emotion classifiers were trained on the Chinese and German EEG data, respectively. The experimental results indicate that: a) German neural patterns are basically in accordance with Chinese ones; b) the main difference lies in the upper temporal region in the Delta band, which is more active when German subjects are in a positive mood; and c) the Chinese positive emotion achieves the best accuracy, while the German emotions share approximately the same accuracy. Moreover, the Gamma band serves as the critical band for both German and Chinese emotion recognition.
- Published
- 2017
110. Emotion Annotation Using Hierarchical Aligned Cluster Analysis
- Author
-
Sheng Fang, Bao-Liang Lu, Wei-Long Zheng, Wei-Ye Zhao, Qian Ji, and Ting Ji
- Subjects
Computer science, Supervised learning, Pattern recognition, Disease cluster, Annotation, Artificial intelligence, Time series, Cluster analysis
- Abstract
The correctness of annotation is very important in supervised learning, especially in electroencephalography (EEG)-based emotion recognition. Conventional EEG annotations for emotion recognition are based on subjects' feedback about emotion elicitation, such as questionnaires. However, these methods are subjective and divorced from the experimental data, which leads to inaccurate annotations. In this paper, we pose the problem of annotation optimization as a temporal clustering problem. We explore two types of clustering algorithms: aligned cluster analysis (ACA) and hierarchical aligned cluster analysis (HACA). We compare the performance of questionnaire-based, ACA-based, and HACA-based annotation on a public EEG dataset called SEED. The experimental results demonstrate that our proposed ACA-based and HACA-based annotations achieve accuracy improvements of 2.59% and 4.53% on average, respectively, which shows their effectiveness for emotion recognition.
- Published
- 2017
111. Investigating Gender Differences of Brain Areas in Emotion Recognition Using LSTM Neural Network
- Author
-
Bao-Liang Lu, Wei Liu, Xue Yan, and Wei-Long Zheng
- Subjects
Artificial neural network, Audiology, Electroencephalography, Lateralization of brain function, Time dependency, Emotion recognition, Psychology
- Abstract
In this paper, we investigate the key brain areas of men and women for recognizing three emotions, namely happy, sad, and neutral, using electroencephalography (EEG) data. Considering that emotion changes over time, a Long Short-Term Memory (LSTM) neural network is adopted for its capacity to capture time dependencies. Our experimental results indicate that the neural patterns of different emotions have specific key brain areas for males and females, with females showing right lateralization and males being more left-lateralized. Accordingly, two non-overlapping brain regions are selected for the two genders. The classification accuracy for females (79.14%) using the right lateralized region is significantly higher than that for males (67.61%), and the left lateralized area yields a significantly higher classification accuracy for males (82.54%) than for females (73.51%), especially for the happy and sad emotions.
- Published
- 2017
112. A novel and environment-friendly bioprocess of 1,3-propanediol fermentation integrated with aqueous two-phase extraction by ethanol/sodium carbonate system
- Author
-
Wei-Long Zheng, Zhigang Li, Yaqin Sun, Zhilong Xiu, and Hu Teng
- Subjects
Environmental engineering, Aqueous solution, Chromatography, Formic acid, Extraction (chemistry), Biomedical engineering, Bioengineering, Lactic acid, Acetic acid, Biochemistry, Carbon dioxide, Fermentation, Sodium carbonate, Biotechnology
- Abstract
An integrated fermentation–separation process for the production of 1,3-propanediol (1,3-PD) was investigated. The aqueous two-phase system (ATPS) not only recovered 97.9% of the 1,3-PD but also simultaneously removed 99.1% of the cells, 81.9% of the proteins, 75.5% of the organic acids, and 78.7% of the water. Furthermore, after extraction the bottom phase of the ATPS was used to adjust the pH of the culture during fermentation, leading to 16% and 126% increases in the concentrations of 1,3-PD and lactic acid, respectively, and dramatic decreases in the concentrations of acetic acid and formic acid. The total mass conversion yield of the three main products (1,3-PD, 2,3-butanediol, and lactic acid) from glycerol reached 81.6%. The salt-enriched phase could also be used to absorb carbon dioxide (CO2), resulting in 94% recovery of carbonate. Finally, process simulation using the program PRO/II showed that the use of ATPS reduced energy expenditure by 75.1% and CO2 emissions by 89.0%.
- Published
- 2013
113. Driving fatigue detection with fusion of EEG and forehead EOG
- Author
-
Wei-Long Zheng, Bao-Liang Lu, and Xue-Qin Huo
- Subjects
Computer science, Feature extraction, Eye movement, Electrooculography, Feature (computer vision), Forehead, Eye tracking, Computer vision, Artificial intelligence, Eye closure, Extreme learning machine
- Abstract
In this paper, we fuse EEG and forehead EOG to detect drivers' fatigue levels using a discriminative graph regularized extreme learning machine (GELM). Twenty-one healthy subjects (twelve men and nine women) participated in our driving simulation experiments. Two fusion strategies are adopted: feature-level fusion (FLF) and decision-level fusion (DLF). PERCLOS (the percentage of eye closure), calculated from eye movement data recorded by eye tracking glasses, serves as the indicator of drivers' fatigue level. The prediction correlation coefficient and root mean square error (RMSE) between the estimated and real fatigue levels are both used to evaluate the performance of the single-modality and fusion approaches. A comparative study on modality performance is conducted between GELM and a support vector machine (SVM). The experimental results show that fusion improves driving fatigue detection, with a higher prediction correlation coefficient and a lower RMSE than using EEG or forehead EOG alone. FLF achieves better performance than DLF, and GELM is more suitable for driving fatigue detection than SVM. Moreover, feature-level fusion with GELM achieves the best performance, with a prediction correlation coefficient of 0.8080 and an RMSE of 0.0712 on average.
- Published
- 2016
114. Measuring sleep quality from EEG with machine learning approaches
- Author
-
Bao-Liang Lu, Lili Wang, Wei-Long Zheng, and Haiwei Ma
- Subjects
Computer science, Speech recognition, Pattern recognition, Electroencephalography, Machine learning, Sleep, Artificial intelligence, Energy (signal processing), Extreme learning machine
- Abstract
This study aims at measuring last-night sleep quality from electroencephalography (EEG). We design a sleep experiment to collect waking EEG signals from eight subjects under three different sleep conditions: 8 hours, 6 hours, and 4 hours of sleep. We utilize three machine learning approaches, k-Nearest Neighbors (kNN), support vector machine (SVM), and discriminative graph regularized extreme learning machine (GELM), to classify extracted power spectral density (PSD) features. The accuracies of these three classifiers without feature selection are 36.68%, 48.28%, and 62.16%, respectively. By using the minimal-redundancy-maximal-relevance (MRMR) algorithm and brain topography, the classification accuracy of GELM with 9 features improves substantially, to 83.57% on average. To investigate the critical frequency bands for measuring sleep quality, we examine the features of each band and observe how their energy changes. The experimental results indicate that the Gamma band is the most relevant for measuring sleep quality.
- Published
- 2016
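The sleep-quality study above examines per-band energy from PSD features. Given a one-sided periodogram (one power value per frequency bin), band energy is simply the sum of power within the band's limits; band boundaries vary across papers, so the Gamma range below (30–50 Hz) is only illustrative:

```python
def band_power(freqs, psd, lo, hi):
    """Sum spectral power over bins with lo <= frequency < hi."""
    return sum(p for f, p in zip(freqs, psd) if lo <= f < hi)

freqs = [10, 20, 30, 40, 50]       # Hz, toy frequency bins
psd   = [2.0, 1.0, 0.5, 0.25, 0.25]  # toy power values
print(band_power(freqs, psd, 30, 50))  # → 0.75 (bins at 30 and 40 Hz)
```

Per-band power values like this, computed for each channel, are the PSD features fed to the classifiers in the abstract.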
115. Emotion Recognition Using Multimodal Deep Learning
- Author
-
Bao-Liang Lu, Wei-Long Zheng, and Wei Liu
- Subjects
Computer science, Property (programming), Speech recognition, Deep learning, Autoencoder, Emotion recognition, Artificial intelligence, Representation (mathematics), Construct (philosophy), Complement (set theory)
- Abstract
To enhance the performance of affective models and reduce the cost of acquiring physiological signals for real-world applications, we adopt a multimodal deep learning approach to construct affective models on the SEED and DEAP datasets for recognizing different kinds of emotions. We demonstrate that the high-level representation features extracted by the Bimodal Deep AutoEncoder (BDAE) are effective for emotion recognition. With the BDAE network, we achieve mean accuracies of 91.01% and 83.25% on the SEED and DEAP datasets, respectively, which are much superior to those of the state-of-the-art approaches. By analysing the confusion matrices, we find that EEG and eye features contain complementary information and that the BDAE network can fully exploit this complementary property to enhance emotion recognition.
- Published
- 2016
116. Transfer components between subjects for EEG-based emotion recognition
- Author
-
Jia-Yi Zhu, Bao-Liang Lu, Yong-Qi Zhang, and Wei-Long Zheng
- Subjects
Speech recognition, Feature extraction, Pattern recognition, Electroencephalography, Component analysis, Kernel (image processing), Principal component analysis, Emotion recognition, Artificial intelligence, Transfer of learning, Psychology, Subspace topology
- Abstract
Addressing the structural and functional variability between subjects is challenging but of great importance for robust affective brain-computer interfaces (aBCI), since the calibration phase for aBCI is time-consuming. In this paper, we propose a subject transfer framework for electroencephalogram (EEG)-based emotion recognition via component analysis. We compare two state-of-the-art subspace projection approaches for subject transfer: transfer component analysis (TCA) and kernel principal component analysis (KPCA). The main idea is to learn a set of transfer components underlying the source domain (source subjects) and target domain (target subject); when projected onto this subspace, the difference between the feature distributions of the two domains is reduced. Our experiments show that the two approaches, TCA and KPCA, achieve best mean accuracies of 71.80% and 79.83%, respectively, compared with a baseline of 58.95%. This significant improvement shows the feasibility and efficiency of our approaches for subject-transfer emotion recognition from EEG signals.
- Published
- 2015
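Transfer component analysis, compared in the entry above, learns components that minimize the maximum mean discrepancy (MMD) between source and target feature distributions. With a linear kernel, squared MMD reduces to the squared distance between the two domain means, which makes the objective easy to sketch (illustrative helper, not the paper's code):

```python
def mmd_sq_linear(source, target):
    """Squared MMD with a linear kernel = ||mean(source) - mean(target)||^2."""
    d = len(source[0])
    mean_s = [sum(x[i] for x in source) / len(source) for i in range(d)]
    mean_t = [sum(x[i] for x in target) / len(target) for i in range(d)]
    return sum((a - b) ** 2 for a, b in zip(mean_s, mean_t))

src = [[0.0, 0.0], [2.0, 0.0]]   # toy source-subject features
tgt = [[1.0, 1.0], [1.0, 3.0]]   # toy target-subject features
print(mmd_sq_linear(src, tgt))   # → 4.0 (means are (1,0) and (1,2))
```

TCA searches for a projection under which this discrepancy is small while data variance is preserved; a classifier trained on projected source features then transfers better to the target subject.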
117. A novel approach to driving fatigue detection using forehead EOG
- Author
-
Bao-Liang Lu, Yu-Fei Zhang, Jia-Yi Zhu, Xiang-Yu Gao, and Wei-Long Zheng
- Subjects
Correlation coefficient, Computer science, Electrooculograms, Support vector machine, Forehead, Eye tracking, Computer vision, Artificial intelligence, Eye closure, Electrode placement
- Abstract
Various studies have shown that traditional electrooculograms (EOGs) are effective for driving fatigue detection. However, the traditional EOG recording method places electrodes around the eyes, which may disturb subjects' activities and is not convenient for practical applications. To deal with this problem, we propose a novel electrode placement on the forehead and present an effective method to extract the horizontal electrooculogram (HEO) and vertical electrooculogram (VEO) from forehead EOG. The correlation coefficients between the extracted HEO and VEO and the corresponding traditional HEO and VEO are 0.86 and 0.78, respectively. To alleviate the inconvenience of manually labelling fatigue states, we use videos recorded by eye tracking glasses to calculate the percentage of eye closure over time, a conventional indicator of driving fatigue. We use support vector machine (SVM) regression and obtain a rather high prediction correlation coefficient of 0.88 on average.
- Published
- 2015
118. Evaluating driving fatigue detection algorithms using eye tracking glasses
- Author
-
Bao-Liang Lu, Wei-Long Zheng, Yu-Fei Zhang, and Xiang-Yu Gao
- Subjects
Ground truth, Feature extraction, Electrooculography, Pupil, Eye tracking, Computer vision, Artificial intelligence, Eye closure, Psychology, Algorithm, Vigilance (psychology)
- Abstract
Fatigue is a state of human brain activity, and driving fatigue detection is a topic of great interest worldwide. In this paper, we propose a measure of fatigue produced by eye tracking glasses and use it as the ground truth to evaluate driving fatigue detection algorithms. In particular, PERCLOS, the percentage of eyelid closure over the pupil over time, was calculated from eyelid movement data provided by the eye tracking glasses. Experiments on a vigilance task were carried out in which both EOG signals and eyelid movement were recorded. The evaluation results of an effective EOG-based fatigue detection algorithm convinced us that our proposed measure is an appropriate candidate for evaluating driving fatigue detection algorithms.
- Published
- 2015
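PERCLOS, used as ground truth in the entry above, is the fraction of time the eyelid covers the pupil beyond a closure threshold. A minimal sketch over a sampled eyelid-openness signal (the 20%-openness threshold below is an illustrative choice; the common P80 criterion counts the eye as closed when the eyelid covers at least 80% of the pupil):

```python
def perclos(openness, threshold=0.2):
    """Fraction of samples where eyelid openness falls below the threshold."""
    closed = sum(1 for o in openness if o < threshold)
    return closed / len(openness)

# 4 of 10 samples fall below 0.2 → PERCLOS = 0.4
print(perclos([1.0, 0.9, 0.1, 0.0, 0.8, 0.05, 0.1, 1.0, 0.9, 0.7]))
```

A detection algorithm's output can then be correlated against this per-window PERCLOS score, as done in the papers above.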
119. Revealing critical channels and frequency bands for emotion recognition from EEG with deep belief network
- Author
-
Bao-Liang Lu, Wei-Long Zheng, and Hao-Tian Guo
- Subjects
Speech recognition, Pattern recognition, Electroencephalography, Support vector machine, Differential entropy, Deep belief network, Weight distribution, Artificial intelligence, Noise (video), Psychology, Communication channel
- Abstract
In EEG-based emotion recognition tasks, multichannel EEG data contain many irrelevant channel signals, which may introduce noise and degrade the performance of emotion recognition systems. To tackle this problem, we propose a novel deep belief network (DBN) based method for examining critical channels and frequency bands. First, we design an emotion experiment and collect EEG data while subjects watch emotional film clips. We then train a DBN to recognize three emotions (positive, neutral, and negative) with extracted differential entropy features as input, and compare the DBN with shallow models such as KNN, LR, and SVM. The experimental results show that the DBN achieves the best average accuracy of 86.08%. We further explore critical channels and frequency bands by examining the weight distribution learned by the DBN, which differs from existing work. We identify four profiles with 4, 6, 9, and 12 channels, which achieve recognition accuracies of 82.88%, 85.03%, 84.02%, and 86.65%, respectively, using SVM.
- Published
- 2015
120. EEG-based emotion recognition with manifold regularized extreme learning machine
- Author
-
Bao-Liang Lu, Yong Peng, Jia-Yi Zhu, and Wei-Long Zheng
- Subjects
Adult, Young Adult, Male, Female, Humans, Channel (digital image), Frequency band, Speech recognition, Emotions, Electroencephalography, Differential entropy, Machine learning, Discriminative model, Extreme learning machine, Pattern recognition, Support vector machine, Artificial intelligence, Psychology, Algorithms
- Abstract
EEG signals, which record electrical activity along the scalp, provide researchers a reliable channel for investigating human emotional states. In this paper, a new algorithm, the manifold regularized extreme learning machine (MRELM), is proposed for recognizing human emotional states (positive, neutral, and negative) from EEG data evoked by watching different types of movie clips. MRELM can simultaneously consider the geometrical structure and the discriminative information in EEG data. Using differential entropy features across all five frequency bands, the average accuracy of MRELM is 81.01%, which is better than those obtained by GELM (80.25%) and SVM (76.62%). The accuracies obtained from high-frequency band features (β, γ) are clearly superior to those of low-frequency band features, which shows that the β and γ bands are more relevant to emotional state transitions. Moreover, experiments in which the training and test sets come from different sessions further evaluate the efficacy of MRELM. The results demonstrate that the proposed MRELM is a competitive model for EEG-based emotion recognition.
- Published
- 2015
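MRELM builds on the extreme learning machine: a random, fixed hidden layer followed by a closed-form ridge-regression solve for the output weights (MRELM adds a manifold regularization term to that solve). A plain-ELM sketch in NumPy with illustrative toy data, assuming nothing about the paper's actual hyperparameters:

```python
import numpy as np

def elm_train(X, T, n_hidden=20, reg=1e-3, seed=0):
    """Train a basic ELM: random hidden weights, ridge-regression output layer."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # random nonlinear feature map
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem: the class is the sign of the first coordinate.
X = np.array([[1.0, 0.0], [2.0, 1.0], [1.5, -1.0],
              [-1.0, 0.0], [-2.0, 1.0], [-1.5, -1.0]])
T = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], dtype=float)
W, b, beta = elm_train(X, T)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
print(pred)
```

Only the output weights `beta` are learned, which is why ELM training is a single linear solve rather than iterative backpropagation.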
121. Multimodal emotion recognition using EEG and eye tracking data
- Author
-
Bo-Nan Dong, Wei-Long Zheng, and Bao-Liang Lu
- Subjects
Male, Female, Young Adult, Humans, Eye movements, Speech recognition, Emotions, Brain, Iris, Electroencephalography, Models (psychological), Multimodal imaging, Pattern recognition (automated), Pupillary response, Feature (machine learning), Eye tracking, Emotion recognition, Psychology, Algorithms
- Abstract
This paper presents a new emotion recognition method which combines electroencephalograph (EEG) signals and pupillary responses collected from an eye tracker. We select 15 emotional film clips of 3 categories (positive, neutral and negative). The EEG signals and eye tracking data of five participants are recorded simultaneously while they watch these videos. We extract emotion-relevant features from the EEG signals and eye tracking data of 12 experiments and build a fusion model to improve the performance of emotion recognition. The best average accuracies based on EEG signals and eye tracking data are 71.77% and 58.90%, respectively. We also achieve average accuracies of 73.59% and 72.98% for the feature-level and decision-level fusion strategies, respectively. These results show that both feature-level fusion and decision-level fusion of EEG signals and eye tracking data can improve the performance of the emotion recognition model.
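The two fusion strategies compared above can be sketched as follows. This is a hedged illustration, not the authors' code: logistic regression stands in for whatever classifier the paper used, and averaging class probabilities is one simple decision-level rule among several:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def feature_level_fusion(X_eeg, X_eye, y):
    """Concatenate per-modality feature vectors, then train one classifier."""
    return LogisticRegression(max_iter=1000).fit(np.hstack([X_eeg, X_eye]), y)

def decision_level_fusion(X_eeg, X_eye, y):
    """Train one classifier per modality; combine by averaging class
    probabilities (an illustrative combination rule)."""
    c_eeg = LogisticRegression(max_iter=1000).fit(X_eeg, y)
    c_eye = LogisticRegression(max_iter=1000).fit(X_eye, y)
    def predict(Xa, Xb):
        p = (c_eeg.predict_proba(Xa) + c_eye.predict_proba(Xb)) / 2
        return c_eeg.classes_[np.argmax(p, axis=1)]
    return predict
```

Feature-level fusion lets the classifier model cross-modality interactions; decision-level fusion keeps the modalities independent until the final vote.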
- Published
- 2015
122. Cross-subject and Cross-gender Emotion Classification from EEG
- Author
-
Bao-Liang Lu, Wei-Long Zheng, and Jia-Yi Zhu
- Subjects
Differential entropy ,medicine.diagnostic_test ,Speech recognition ,Emotion classification ,medicine ,Feature (machine learning) ,Electroencephalography ,Psychology ,Set (psychology) ,Smoothing ,Brain–computer interface ,Test data - Abstract
This paper aims to explore whether different persons share similar patterns of EEG changes with emotions, and to examine the performance of cross-subject and cross-gender emotion classification from EEG. Movie clips are used to evoke three emotional states: positive, neutral, and negative. We adopt differential entropy (DE) as features and apply a linear dynamic system (LDS) for feature smoothing. The average cross-subject classification accuracy is 64.82% with five frequency bands, using data from 14 subjects as the training set and data from the remaining subject as the test set. As the training set expands from one subject to 14 subjects, the average accuracy increases continuously. Moreover, a fuzzy-integral-based combination method is used to combine models across frequency bands, yielding an average accuracy of 72.82%. The better performance when training and testing data both come from female subjects partly implies that there are gender differences in EEG patterns when processing emotions.
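The LDS feature-smoothing step can be approximated by a per-dimension Kalman filter under a random-walk state model. This is a simplified stand-in for the LDS used in the paper; the noise variances below are assumed values for illustration:

```python
import numpy as np

def lds_smooth(X, q=1e-3, r=1e-1):
    """Forward-pass Kalman filtering of a feature sequence X (T x D).
    q: assumed process-noise variance; r: assumed observation-noise variance."""
    T, D = X.shape
    out = np.empty_like(X, dtype=float)
    m = X[0].astype(float)          # state mean
    p = np.ones(D)                  # state variance
    out[0] = m
    for t in range(1, T):
        p = p + q                   # predict
        k = p / (p + r)             # Kalman gain
        m = m + k * (X[t] - m)      # update with new observation
        p = (1 - k) * p
        out[t] = m
    return out
```

The smoothed trajectory suppresses frame-to-frame noise in the DE features while tracking the slower emotional dynamics.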
- Published
- 2015
123. A multimodal approach to estimating vigilance using EEG and forehead EOG
- Author
-
Bao-Liang Lu and Wei-Long Zheng
- Subjects
FOS: Computer and information sciences ,Adult ,Male ,Conditional random field ,genetic structures ,media_common.quotation_subject ,Speech recognition ,Computer Science - Human-Computer Interaction ,Biomedical Engineering ,Poison control ,02 engineering and technology ,Electroencephalography ,Sensitivity and Specificity ,Human-Computer Interaction (cs.HC) ,Pattern Recognition, Automated ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Humans ,media_common ,medicine.diagnostic_test ,Reproducibility of Results ,Eye movement ,Electrooculography ,Brain Waves ,ComputingMethodologies_PATTERNRECOGNITION ,medicine.anatomical_structure ,Forehead ,Eye tracking ,Female ,020201 artificial intelligence & image processing ,sense organs ,Arousal ,Psychology ,Algorithms ,Psychomotor Performance ,030217 neurology & neurosurgery ,Vigilance (psychology) - Abstract
Objective. Covert aspects of ongoing user mental states provide key context information for user-aware human computer interactions. In this paper, we focus on the problem of estimating the vigilance of users using EEG and EOG signals. Approach. To improve the feasibility and wearability of vigilance estimation devices for real-world applications, we adopt a novel electrode placement for forehead EOG and extract various eye movement features, which contain the principal information of traditional EOG. We explore the effects of EEG from different brain areas and combine EEG and forehead EOG to leverage their complementary characteristics for vigilance estimation. Considering that the vigilance of users is a dynamic changing process because the intrinsic mental states of users involve temporal evolution, we introduce continuous conditional neural field and continuous conditional random field models to capture dynamic temporal dependency. Main results. We propose a multimodal approach to estimating vigilance by combining EEG and forehead EOG and incorporating the temporal dependency of vigilance into model training. The experimental results demonstrate that modality fusion can improve the performance compared with a single modality, EOG and EEG contain complementary information for vigilance estimation, and the temporal dependency-based models can enhance the performance of vigilance estimation. From the experimental results, we observe that theta and alpha frequency activities are increased, while gamma frequency activities are decreased in drowsy states in contrast to awake states. Significance. The forehead setup allows for the simultaneous collection of EEG and EOG and achieves comparable performance using only four shared electrodes, compared with the temporal and posterior sites.
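Performance in this continuous-vigilance setting is typically reported as Pearson's correlation between the predicted curve and the annotated vigilance level. A minimal implementation of the metric (numerically guarded with a small epsilon, an implementation detail of this sketch):

```python
import numpy as np

def pearson_r(pred, label):
    """Pearson's correlation coefficient between a predicted vigilance
    curve and the continuous annotation."""
    p = np.asarray(pred, dtype=float) - np.mean(pred)
    q = np.asarray(label, dtype=float) - np.mean(label)
    return float(np.sum(p * q) /
                 (np.sqrt(np.sum(p ** 2) * np.sum(q ** 2)) + 1e-12))
```

Comparing this score for single-modality and fused predictions is how fusion gains like those above are quantified.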
- Published
- 2017
124. EOG-based drowsiness detection using convolutional neural networks
- Author
-
Xiaoping Chen, Bao-Liang Lu, Wei-Long Zheng, Shanguang Chen, Chun-Hui Wang, and Xuemin Zhu
- Subjects
medicine.diagnostic_test ,Computer science ,business.industry ,Property (programming) ,Feature extraction ,Pattern recognition ,Electrooculography ,Convolutional neural network ,ComputingMethodologies_PATTERNRECOGNITION ,Linear regression ,medicine ,Unsupervised learning ,Artificial intelligence ,business - Abstract
This study provides a new application of convolutional neural networks for drowsiness detection based on electrooculography (EOG) signals. Drowsiness is regarded as one of the major causes of traffic accidents, so such an application can help reduce casualties and property losses. Most attempts at drowsiness detection based on EOG involve a feature extraction step, which is time-consuming, and it is difficult to extract effective features. In this paper, an unsupervised learning approach is proposed to estimate driver fatigue based on EOG. A convolutional neural network with a linear regression layer is applied to EOG signals to avoid the use of handcrafted features. With a postprocessing step based on a linear dynamic system (LDS), we capture shifts in physiological state. The performance of the proposed model is evaluated by the correlation coefficients between the final outputs and the local error rates of the subjects. Compared with the results of a manual ad-hoc feature extraction approach, our method proves effective for drowsiness detection.
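The data flow of such a network can be sketched as a forward pass: 1-D convolution over the EOG trace, a nonlinearity, pooling, then a linear regression layer producing a scalar drowsiness score. The weights below are random placeholders purely to show the architecture; the paper learns them (unsupervised pretraining plus regression), which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_regressor(eog, kernels, w, b):
    """Minimal forward pass: conv -> ReLU -> global average pool -> linear
    regression layer. Returns a scalar drowsiness score."""
    pooled = []
    for k in kernels:
        fmap = np.convolve(eog, k, mode="valid")   # 1-D convolution
        fmap = np.maximum(fmap, 0.0)               # ReLU nonlinearity
        pooled.append(fmap.mean())                 # global average pooling
    return float(np.dot(w, pooled) + b)            # linear regression layer

# Random weights, only to demonstrate the data flow end to end.
kernels = [rng.standard_normal(16) for _ in range(4)]
w, b = rng.standard_normal(4), 0.0
score = conv1d_regressor(rng.standard_normal(512), kernels, w, b)
```

In the paper's pipeline, the resulting score sequence is then smoothed by the LDS postprocessing step before evaluation.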
- Published
- 2014
125. EEG-based emotion recognition using discriminative graph regularized extreme learning machine
- Author
-
Ruo-Nan Duan, Jia-Yi Zhu, Bao-Liang Lu, Wei-Long Zheng, and Yong Peng
- Subjects
medicine.diagnostic_test ,Computer science ,business.industry ,Speech recognition ,Pattern recognition ,Electroencephalography ,Differential entropy ,Support vector machine ,Discriminative model ,medicine ,Artificial intelligence ,Emotion recognition ,business ,Classifier (UML) ,Extreme learning machine - Abstract
This study aims at finding the relationship between EEG signals and human emotional states. Movie clips are used as stimuli to evoke positive, neutral and negative emotions in subjects. We introduce a new effective classifier named discriminative graph regularized extreme learning machine (GELM) for EEG-based emotion recognition. The average classification accuracy of GELM using differential entropy (DE) features on all five frequency bands is 80.25%, while the accuracy of SVM is 76.62%. These results indicate that GELM is more suitable for emotion recognition than SVM. Additionally, the accuracies of GELM using DE features on the Beta and Gamma bands are 79.07% and 79.93%, respectively, suggesting that these two bands are more relevant to emotion. The experimental results indicate that the EEG patterns for emotion are generally stable across different experiments and subjects. By using the minimal-redundancy-maximal-relevance (mRMR) algorithm and correlation coefficients to select effective features, we obtain the distribution of the top 20 subject-independent features and build a manifold model to monitor the trajectory of emotion changes over time.
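The correlation-based half of the feature selection step mentioned above can be sketched as ranking features by absolute Pearson correlation with the (numerically coded) label and keeping the top k. This is a hedged illustration; full mRMR additionally penalizes redundancy between selected features, which is omitted here:

```python
import numpy as np

def top_k_by_correlation(X, y, k=20):
    """Rank columns of X (n_samples x n_features) by |Pearson correlation|
    with label vector y and return the indices of the top k."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(r))[:k]
```

Restricting the classifier to such a stable subset is what makes the top-20 features usable across subjects.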
- Published
- 2014
126. Multimodel emotion analysis in response to multimedia
- Author
-
Jia-Yi Zhu, Wei-Long Zheng, and Bao-Liang Lu
- Subjects
Support vector machine ,Multimedia ,medicine.diagnostic_test ,Computer science ,Speech recognition ,medicine ,Eye tracking ,Electroencephalography ,computer.software_genre ,Affective computing ,computer ,Classifier (UML) - Abstract
In this demo paper, we designed a novel framework combining EEG and eye tracking signals to analyze users' emotional activities in response to multimedia. To realize the proposed framework, we extracted efficient features from the EEG and eye tracking signals and used a support vector machine as the classifier. We combined multimodal features using feature-level fusion and decision-level fusion to classify three emotional categories (positive, neutral and negative), achieving average accuracies of 75.62% and 74.92%, respectively. We investigated the brain activities associated with emotions. Our experimental results indicated that there exist stable common patterns and activated areas of the brain associated with positive and negative emotions. In the demo, we also showed the trajectory of emotion changes in response to multimedia.
- Published
- 2014
127. A multimodal approach to estimating vigilance using EEG and forehead EOG.
- Author
-
Wei-Long Zheng and Bao-Liang Lu
- Published
- 2017
- Full Text
- View/download PDF