Two-Stage Spatial Mapping for Multimodal Data Fusion in Mobile Crowd Sensing

Authors :
Jiancun Zhou
Tao Xu
Sheng Ren
Kehua Guo
Source :
IEEE Access, Vol 8, Pp 96727-96737 (2020)
Publication Year :
2020
Publisher :
IEEE, 2020.

Abstract

Human-driven Edge Computing (HEC) integrates humans, devices, the Internet, and information, and mobile crowd sensing has become an important means of data collection. In HEC, data collected through large-scale sensing usually spans multiple modalities. Each modality carries unique, often complementary, information and attributes, so combining data from many modalities yields richer information. However, current deep learning methods typically handle only bimodal data. For artificial intelligence to make further breakthroughs in understanding the real world, it must be able to process data from different modalities jointly, and the key step is mapping those modalities into the same space. To process multimodal data better, we propose a fusion and classification method for multimodal data. First, a multimodal data space is constructed, and data of different modalities are mapped into it to obtain a unified representation. Then, the representations of the different modalities are fused through bilinear pooling, and the fused vectors are used in the classification task. Experiments on a multimodal dataset verify that the fused multimodal representation is effective and that its classification accuracy exceeds that of single-modal data.
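The two stages described above — projecting each modality into a shared space, then fusing the unified representations with bilinear pooling — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the random linear projections `W_img` and `W_txt`, and the variable names are all hypothetical, and a real system would learn the projections end-to-end.

```python
import numpy as np

# Hypothetical dimensions: two modalities with different raw feature sizes.
rng = np.random.default_rng(0)
d_img, d_txt, d_shared = 6, 4, 3

# Stage 1: map each modality into the shared multimodal space.
# W_img and W_txt stand in for learned projections (random here).
W_img = rng.standard_normal((d_shared, d_img))
W_txt = rng.standard_normal((d_shared, d_txt))

x_img = rng.standard_normal(d_img)   # e.g. an image feature vector
x_txt = rng.standard_normal(d_txt)   # e.g. a text feature vector

z_img = W_img @ x_img                # unified representation, modality 1
z_txt = W_txt @ x_txt                # unified representation, modality 2

# Stage 2: bilinear pooling. The outer product captures all pairwise
# interactions between the two representations; flattening it gives one
# fused vector that a downstream classifier can consume.
fused = np.outer(z_img, z_txt).ravel()
print(fused.shape)                   # (9,) = d_shared * d_shared
```

Because the outer product grows as the square of the shared dimension, practical systems often follow it with a compact pooling or projection step before classification.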

Details

Language :
English
ISSN :
21693536
Volume :
8
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.1800cd82f4024d0a832f5b086d2eb42b
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2020.2995268