6 results for "Xinqi Fan"
Search Results
2. A Deep Learning Based Light-Weight Face Mask Detector With Residual Context Attention and Gaussian Heatmap to Fight Against COVID-19
- Author
- Hong Yan, Mingjie Jiang, and Xinqi Fan
- Subjects
- Deep learning, Feature extraction, Residual context attention, Synthesized Gaussian heat map regression, Facial recognition system, Coronavirus disease 2019, Face mask detection
- Abstract
Coronavirus disease 2019 has seriously affected the world. One major protective measure for individuals is to wear masks in public areas, and several regions have made mask-wearing in public compulsory to prevent transmission of the virus. Few studies have examined automatic face mask detection based on image analysis. In this paper, we propose a deep learning based single-shot light-weight face mask detector that meets the low computational requirements of embedded systems while achieving high performance. To cope with the low feature extraction capability of a light-weight model, we propose two novel methods to enhance the model's feature extraction process. First, to extract rich context information and focus on crucial face mask related regions, we propose a novel residual context attention module. Second, to learn more discriminating features for faces with and without masks, we introduce a novel auxiliary task using synthesized Gaussian heat map regression. Ablation studies show that these methods considerably boost feature extraction ability and thus the final detection performance. Comparison with other models shows that the proposed model achieves state-of-the-art results on two public datasets, the AIZOO and Moxa3K face mask datasets. In particular, compared with the light-weight YOLOv3-tiny (you only look once version 3 tiny) model, the mean average precision of our model is 1.7% higher on the AIZOO dataset and 10.47% higher on the Moxa3K dataset. Therefore, the proposed model has high potential to contribute to public health care and the fight against the coronavirus disease 2019 pandemic.
- Published
- 2021
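The abstract names two mechanisms (residual context attention and a synthesized Gaussian heat map regression target) but gives no architectural details here. As a rough illustration only, the PyTorch sketch below shows a generic residual spatial attention block and a Gaussian heat map target; every layer choice and name is an assumption, not the authors' implementation.

```python
# Hedged sketch of a residual spatial attention block and a synthesized
# Gaussian heat map regression target. Layer choices are illustrative only.
import torch
import torch.nn as nn

class ResidualContextAttention(nn.Module):
    """Spatial attention with a residual (skip) connection."""
    def __init__(self, channels: int):
        super().__init__()
        # Dilated convolution enlarges the receptive field to gather context.
        self.context = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        # 1x1 convolution + sigmoid yields a spatial attention map in [0, 1].
        self.attention = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        attn = self.attention(torch.relu(self.context(x)))
        # Residual form: attention refines the features rather than replacing them.
        return x + x * attn

def gaussian_heatmap(h, w, cx, cy, sigma=4.0):
    """Synthesize a 2D Gaussian centered at (cx, cy) as a regression target."""
    ys = torch.arange(h).view(-1, 1).float()
    xs = torch.arange(w).view(1, -1).float()
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

feats = torch.randn(1, 64, 32, 32)
out = ResidualContextAttention(64)(feats)        # same shape as the input
target = gaussian_heatmap(32, 32, cx=16, cy=16)  # peak of 1.0 at the face center
```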
3. Facial Micro-Expression Generation based on Deep Motion Retargeting and Transfer Learning
- Author
- Xinqi Fan, Hong Yan, and Ali Raza Shahid
- Subjects
- Training set, Artificial neural network, Feature extraction, Motion retargeting, Computer vision, Artificial intelligence, Transfer learning
- Abstract
Facial micro-expression (FME) refers to a brief, spontaneous facial movement that can reveal a person's genuine emotion. One challenge in facial micro-expression research is the lack of data. Fortunately, generative deep neural network models can assist in the creation of desired images. However, for micro-expressions the facial variations are too subtle to capture, and the limited training data may make feature extraction difficult. To address these issues, we developed a deep motion retargeting and transfer learning based facial micro-expression generation model (DMT-FMEG). First, to capture subtle variations, we employed a deep motion retargeting (DMR) network that learns keypoints in an unsupervised manner, estimates motions, and generates the desired images. Second, to enhance feature extraction ability, we applied deep transfer learning (DTL), borrowing knowledge from macro-expression images. We evaluated our method on three datasets, CASME II, SMIC, and SAMM, and it showed satisfactory results on all of them. With this method, we won second place in the generation task of the FME 2021 challenge.
- Published
- 2021
- Full Text
- View/download PDF
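The transfer-learning step above, initializing a micro-expression model from weights learned on abundant macro-expression images, can be sketched as follows. This is a minimal PyTorch illustration with assumed toy model shapes and class counts, not the DMT-FMEG implementation.

```python
# Hedged sketch: borrow compatible pretrained weights from a macro-expression
# model, then fine-tune on scarce micro-expression data.
import torch
import torch.nn as nn

def make_model(num_classes: int) -> nn.Module:
    # Stand-in feature extractor; the real networks are much larger.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, num_classes),
    )

# Assume a model already trained on macro-expression data (class counts assumed).
macro_model = make_model(num_classes=7)
micro_model = make_model(num_classes=5)

# Copy every pretrained tensor whose name and shape match; the classifier
# layer differs in shape and is therefore left randomly initialized.
macro_state = macro_model.state_dict()
micro_state = micro_model.state_dict()
micro_state.update({k: v for k, v in macro_state.items()
                    if k in micro_state and v.shape == micro_state[k].shape})
micro_model.load_state_dict(micro_state)

# Fine-tune on the limited micro-expression data with a small learning rate.
optimizer = torch.optim.Adam(micro_model.parameters(), lr=1e-4)
```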
4. Quantized Separable Residual Network for Facial Expression Recognition on FPGA
- Author
- Yang Li, Hong Yan, Huaizhi Zhang, Mingjie Jiang, and Xinqi Fan
- Subjects
- Facial expression, Deep learning, Inference, Speedup, Edge device, Pattern recognition, Field-programmable gate array
- Abstract
Facial expression recognition plays an important role in human-machine interaction and has thus become an important task in cognitive science and artificial intelligence. In vision research, facial expression recognition aims to identify facial expressions in images or videos, but little of this work has targeted real-world applications. In this work, we propose a hardware-friendly quantized separable residual network and develop a real-world facial expression recognition system on a field-programmable gate array (FPGA). The proposed network is first trained on devices with graphics processing units and then quantized to speed up inference. Finally, the quantized model is deployed on a high-performance edge device, the Ultra96-V2 FPGA board. The complete system captures images, detects faces, and recognizes expressions. We conduct exhaustive experiments comparing the performance with various deep learning models and show superior results. The overall system also demonstrates satisfactory performance on the FPGA and can be considered an important milestone for facial expression recognition applications in the real world.
- Published
- 2021
- Full Text
- View/download PDF
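The quantization step above is performed with a vendor toolchain for the Ultra96-V2, which is not described in this abstract. Purely as a hedged illustration of the train-then-quantize idea, the sketch below applies PyTorch's post-training dynamic quantization to a toy classifier; it is not the authors' FPGA flow, and the model and input sizes are assumptions.

```python
# Hedged sketch: quantize a trained float32 model to int8 to speed up inference.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(48 * 48, 128), nn.ReLU(),
    nn.Linear(128, 7),               # 7 basic facial expressions (assumed)
)
model.eval()                          # quantize after training, for inference

# Replace float32 Linear layers with int8 equivalents; activations are
# quantized on the fly at inference time.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 1, 48, 48)        # a 48x48 grayscale face crop (assumed)
print(qmodel(x).argmax(dim=1))       # predicted expression index
```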
5. Hybrid Separable Convolutional Inception Residual Network for Human Facial Expression Recognition
- Author
- Xinqi Fan, Luoxiao Yang, Ali Raza Shahid, Jianfeng Cao, Rizwan Qureshi, and Hong Yan
- Subjects
- Facial expression, Artificial neural network, Convolutional neural network, Convolution, Residual, Face detection, Facial recognition system, Pattern recognition
- Abstract
Facial expression recognition is widely applied in human-machine interaction, security, and business applications. Its aim is to classify human expressions from face images. In this work, we propose a novel neural network-based pipeline for facial expression recognition, the Hybrid Separable Convolutional Inception Residual Network, which combines transfer learning with an Inception residual network and depth-wise separable convolution. Specifically, our method uses a multi-task convolutional neural network for face detection, modifies the last two blocks of the original Inception residual network with depthwise separable convolution to reduce computation cost, and finally uses transfer learning to take advantage of transferable weights from a large face recognition dataset. Experimental results on three different databases, the Radboud Faces Database, the Compound Facial Expressions of Emotion database, and the Real-world Affective Faces Database, show superior performance compared with existing studies. Moreover, the proposed method is computationally efficient, reducing the trainable parameters by approximately 25% compared with the original Inception residual network.
- Published
- 2020
- Full Text
- View/download PDF
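The parameter saving above comes from replacing standard convolutions with depthwise separable ones. A minimal PyTorch sketch of that construction follows; the channel counts are assumptions chosen only to show the weight-count gap.

```python
# Hedged sketch: depthwise separable convolution = per-channel (depthwise)
# convolution followed by a 1x1 (pointwise) convolution that mixes channels.
import torch
import torch.nn as nn

def separable_conv(in_ch: int, out_ch: int, k: int = 3) -> nn.Module:
    return nn.Sequential(
        # Depthwise: one k x k filter per input channel (groups=in_ch).
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch),
        # Pointwise: 1x1 convolution recombines the channels.
        nn.Conv2d(in_ch, out_ch, kernel_size=1),
    )

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv2d(256, 256, 3, padding=1)
separable = separable_conv(256, 256)
print(n_params(standard), n_params(separable))   # ~590k vs ~68k parameters
```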
6. Pedestrian walking safety system based on smartphone built‐in sensors
- Author
- Xinqi Fan, Yantao Li, Zehui Qu, Gang Zhou, and Fengtao Xue
- Subjects
- Eye movement, Pedestrian, Accelerometer, Object detection, Walking speed, Computer vision, Android (operating system)
- Abstract
Watching a smartphone while walking has a significant impact on pedestrian safety. Pedestrians staring at smartphone screens while walking along the sidewalk are generally at greater risk than pedestrians not using a smartphone. In this study, the authors propose Safe Walking, an Android smartphone-based system that detects the walking behaviour of pedestrians by leveraging the built-in sensors and front camera, improving the safety of pedestrians staring at smartphone screens. More specifically, Safe Walking first applies a pedestrian speed calculation algorithm that samples acceleration data via the accelerometer and obtains gravity components via the gravity sensor. The system then utilises a greyscale image detection algorithm based on OpenCV4Android to detect face and eye movement modes and determine whether pedestrians are staring at the screen. Finally, Safe Walking triggers the smartphone's vibrator to alert pedestrians to pay attention to road conditions. The authors implemented Safe Walking on an Android smartphone and evaluated pedestrian walking speed, eye movement accuracy, and system performance. The results show that Safe Walking can warn of potential danger for pedestrians staring at smartphone screens with a true positive rate of 91%.
- Published
- 2018
- Full Text
- View/download PDF
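The speed calculation described above subtracts gravity-sensor readings from raw accelerometer samples and works with the residual linear acceleration. Below is a hedged, deliberately naive Python sketch of that idea; the paper's actual algorithm is not given in this abstract, and a real system would also need filtering and drift correction, omitted here.

```python
# Hedged sketch: integrate gravity-corrected acceleration to estimate speed.

def estimate_speed(accel, gravity, dt):
    """accel, gravity: lists of (x, y, z) samples; dt: sample period in seconds."""
    vx = vy = vz = 0.0
    for (ax, ay, az), (gx, gy, gz) in zip(accel, gravity):
        # Linear acceleration = raw accelerometer reading minus gravity.
        vx += (ax - gx) * dt
        vy += (ay - gy) * dt
        vz += (az - gz) * dt
    return (vx**2 + vy**2 + vz**2) ** 0.5   # speed magnitude in m/s

# Example: 1 s of 50 Hz samples while accelerating gently along x.
accel = [(0.3, 0.0, 9.81)] * 50
gravity = [(0.0, 0.0, 9.81)] * 50
print(estimate_speed(accel, gravity, dt=0.02))   # ~0.3 m/s
```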