1. Efficient Unsupervised Dimension Reduction for Streaming Multiview Data
- Author
-
Haikun Wei, Liping Xie, Dacheng Tao, Weili Guo, and Yuan Yan Tang
- Subjects
Computer science, Machine learning, Dimensionality reduction, Iterative method, Artificial intelligence, Representation (mathematics), Algorithms, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Electrical and Electronic Engineering, Software, Information Systems - Abstract
Multiview learning has received substantial attention over the past decade due to its powerful capacity for integrating various types of information. Conventional unsupervised multiview dimension reduction (UMDR) methods are usually conducted in an offline manner and may fail in many real-world applications, where data arrive sequentially and the data distribution changes periodically. Moreover, in large-scale scenarios, such methods suffer from high memory consumption and expensive retraining time. To remedy these drawbacks, we propose an online UMDR (OUMDR) framework. OUMDR aims to seek a low-dimensional and informative consensus representation for streaming multiview data. View-specific weights are also learned in this article to reflect the contributions of different views to the final consensus representation. A specific model called OUMDR-E is developed by introducing the exclusive group LASSO (EG-LASSO) to explore the intra-view and inter-view correlations. We then develop an efficient iterative algorithm with limited memory and time cost requirements for optimization, where the convergence of each update is theoretically guaranteed. We evaluate the proposed approach on video-based expression recognition applications. The experimental results demonstrate the superiority of our approach in terms of both effectiveness and efficiency.
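The abstract's core idea can be illustrated with a minimal sketch: each arriving multiview sample is projected into a shared low-dimensional space, fused into a weighted consensus representation, and the view weights are then refreshed. Everything below is an assumption for illustration only: fixed random projections stand in for the learned ones, and a simple residual-based reweighting rule replaces the paper's EG-LASSO-regularized optimization, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def consensus_step(xs, Ws, alphas):
    """One streaming update (hypothetical): project each view, fuse
    projections by the current view weights, then refresh the weights
    from each view's disagreement with the consensus."""
    # Per-view low-dimensional projections z_v = W_v^T x_v
    zs = [W.T @ x for W, x in zip(Ws, xs)]
    # Weighted consensus representation (closed form for fixed weights)
    z = sum(a * zv for a, zv in zip(alphas, zs)) / sum(alphas)
    # Residual-based reweighting: views that agree with the consensus
    # receive larger weights (illustrative rule, not the paper's update)
    residuals = np.array([np.linalg.norm(zv - z) for zv in zs])
    new_alphas = np.exp(-residuals)
    new_alphas /= new_alphas.sum()  # keep weights on the simplex
    return z, new_alphas

# Toy stream: two views with dimensions 6 and 4, consensus dimension 3
dims, k = [6, 4], 3
Ws = [rng.standard_normal((d, k)) / np.sqrt(d) for d in dims]
alphas = np.full(len(dims), 1.0 / len(dims))
for _ in range(5):  # five sequentially arriving samples
    xs = [rng.standard_normal(d) for d in dims]
    z, alphas = consensus_step(xs, Ws, alphas)
```

Because each update touches only the current sample, memory and per-step time stay constant regardless of stream length, which is the efficiency property the streaming setting demands.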
- Published
- 2022