Efficient and Differentiable Low-Rank Matrix Completion With Back Propagation.

Authors :
Chen, Zhaoliang
Yao, Jie
Xiao, Guobao
Wang, Shiping
Source :
IEEE Transactions on Multimedia; 2023, Vol. 25, p228-242, 15p
Publication Year :
2023

Abstract

Low-rank matrix completion has gained rapidly increasing attention from researchers in recent years for its efficient recovery of incomplete matrices in various fields. Numerous studies have exploited popular neural networks to yield low-rank outputs under the framework of low-rank matrix factorization. However, due to the discontinuity and nonconvexity of the rank function, it is difficult to directly optimize the rank function via back propagation. Although a large number of studies have attempted to find relaxations of the rank function, e.g., the Schatten-$p$ norm, they still face the following issues when updating parameters via back propagation: 1) These methods or surrogate functions are still non-differentiable, bringing obstacles to deriving the gradients of trainable variables. 2) Most of these surrogate functions perform singular value decomposition upon the original matrix at each iteration, which is time-consuming and blocks the propagation of gradients. To address these problems, in this paper, we develop an efficient block-wise model, dubbed the differentiable low-rank learning (DLRL) framework, that adopts back propagation to optimize the Multi-Schatten-$p$ norm Surrogate (MSS) function. Distinct from the original optimization of this surrogate function, the proposed framework avoids singular value decomposition to admit gradient propagation and builds a block-wise learning scheme to minimize the values of Schatten-$p$ norms. Accordingly, it speeds up the computation and makes all parameters in the proposed framework learnable according to a predefined loss function. Finally, we conduct substantial experiments in terms of image recovery and collaborative filtering. The experimental results verify the superiority of the proposed framework in terms of both runtime and learning performance compared with other state-of-the-art low-rank optimization methods. Our codes are available at https://github.com/chenzl23/DLRL.
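The central idea the abstract describes, replacing a non-differentiable rank surrogate with a factorized form whose gradients flow without any singular value decomposition, can be illustrated with the classic variational identity for the nuclear norm (the Schatten-1 special case): $\|X\|_* = \min_{X = UV} \tfrac{1}{2}(\|U\|_F^2 + \|V\|_F^2)$. The sketch below is a minimal, hypothetical illustration of that general principle only, not the paper's DLRL implementation; the matrix sizes, learning rate, and regularization weight are all assumptions.

```python
import numpy as np

# Hypothetical sketch: matrix completion via the factorization X ≈ U V.
# The Frobenius penalty on the factors is a differentiable surrogate for
# the nuclear norm, so every update is a plain gradient step -- no SVD,
# matching the motivation described in the abstract (not the DLRL code).

rng = np.random.default_rng(0)
m, n, r = 20, 15, 3

# Ground-truth rank-3 matrix and a random mask of observed entries.
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.6

U = 0.1 * rng.standard_normal((m, r))   # small random initialization
V = 0.1 * rng.standard_normal((r, n))
lam, lr = 1e-3, 0.02                    # assumed hyperparameters

for _ in range(5000):
    R = mask * (U @ V - M)              # residual on observed entries only
    gU = R @ V.T + lam * U              # gradient of the loss w.r.t. U
    gV = U.T @ R + lam * V              # gradient of the loss w.r.t. V
    U -= lr * gU
    V -= lr * gV

# Relative error on the observed entries after optimization.
err = np.linalg.norm(mask * (U @ V - M)) / np.linalg.norm(mask * M)
print(err)
```

Because the loss is differentiable in the factors $U$ and $V$, the same objective could equally be minimized by an automatic-differentiation framework, which is the property that lets surrogates of this family be trained end to end with back propagation.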

Details

Language :
English
ISSN :
1520-9210
Volume :
25
Database :
Complementary Index
Journal :
IEEE Transactions on Multimedia
Publication Type :
Academic Journal
Accession number :
161621438
Full Text :
https://doi.org/10.1109/TMM.2021.3124087