
Low-Tubal-Rank Tensor Completion Using Alternating Minimization.

Authors :
Liu, Xiao-Yang
Aeron, Shuchin
Aggarwal, Vaneet
Wang, Xiaodong
Source :
IEEE Transactions on Information Theory. Mar 2020, Vol. 66 Issue 3, p1714-1737. 24p.
Publication Year :
2020

Abstract

The low-tubal-rank tensor model has recently been proposed for real-world multidimensional data. In this paper, we study the low-tubal-rank tensor completion problem, i.e., recovering a third-order tensor from a subset of its elements selected uniformly at random. We propose a fast iterative algorithm, called Tubal-AltMin, inspired by a similar approach for low-rank matrix completion. The unknown low-tubal-rank tensor is represented as the product of two much smaller tensors, with the low-tubal-rank property automatically incorporated, and Tubal-AltMin alternates between estimating the two factors by tensor least squares minimization. First, we note that tensor least squares minimization differs from its matrix counterpart and is nontrivial, because the circular convolution operator of the low-tubal-rank tensor model is intertwined with the sub-sampling operator. Second, establishing a theoretical performance guarantee is challenging because Tubal-AltMin is iterative and nonconvex. We prove that 1) Tubal-AltMin generates a best rank-$r$ approximation up to any predefined accuracy $\epsilon$ at an exponential rate, and 2) for an $n \times n \times k$ tensor $\mathcal{M}$ with tubal-rank $r \ll n$, the required sampling complexity is $O((n r^{2} k \|\mathcal{M}\|_{F}^{2} \log^{3} n) / \overline{\sigma}_{rk}^{2})$, where $\overline{\sigma}_{rk}$ is the $rk$-th singular value of the block-diagonal matrix representation of $\mathcal{M}$ in the frequency domain, and the computational complexity is $O(n^{2} r^{2} k^{3} \log n \log(n/\epsilon))$. Finally, on both synthetic data and real-world video data, evaluation results show that, compared with tensor-nuclear-norm minimization using the alternating direction method of multipliers (TNN-ADMM), Tubal-AltMin-Simple (a simplified implementation of Tubal-AltMin) improves the recovery error by several orders of magnitude. In experiments, Tubal-AltMin-Simple is faster than TNN-ADMM by a factor of 5 for a $200 \times 200 \times 20$ tensor. [ABSTRACT FROM AUTHOR]
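To make the algorithmic idea concrete, the following is a minimal, hypothetical Python/NumPy sketch of t-product-based alternating minimization; it is not the paper's implementation. All names (tprod, ttrans, bcirc, ls_right_factor, tubal_altmin) and the random initialization are illustrative choices: the published Tubal-AltMin uses a careful initialization plus smoothing steps, and exploits the frequency-domain block-diagonal structure for efficiency rather than forming the dense block-circulant matrix built here.

    import numpy as np

    def tprod(X, Y):
        """t-product of third-order tensors: slice-wise matrix products in
        the frequency domain (FFT along the third mode), then inverse FFT."""
        Xf = np.fft.fft(X, axis=2)
        Yf = np.fft.fft(Y, axis=2)
        Zf = np.einsum("acl,cbl->abl", Xf, Yf)
        return np.real(np.fft.ifft(Zf, axis=2))

    def ttrans(X):
        """t-product transpose: transpose each frontal slice and reverse the
        order of frontal slices 2..k, so that (X * Y)^T = Y^T * X^T."""
        Xt = np.transpose(X, (1, 0, 2))
        return np.concatenate([Xt[:, :, :1], Xt[:, :, :0:-1]], axis=2)

    def bcirc(X):
        """Block-circulant unfolding of X (n x r x k) into an (n*k) x (r*k)
        matrix, so the t-product becomes an ordinary matrix-vector product."""
        n, r, k = X.shape
        A = np.zeros((n * k, r * k))
        for l in range(k):
            for m in range(k):
                A[l*n:(l+1)*n, m*r:(m+1)*r] = X[:, :, (l - m) % k]
        return A

    def ls_right_factor(M, mask, X):
        """Fix the left factor X and solve min_Y ||P_Omega(X * Y - M)||_F^2.
        The problem decouples over lateral slices (columns) b of M, because
        (X * Y)[:, b, :] depends only on Y[:, b, :]."""
        (n1, r, k), n2 = X.shape, M.shape[1]
        A = bcirc(X)
        Y = np.zeros((r, n2, k))
        for b in range(n2):
            obs = mask[:, b, :].ravel(order="F")   # observed entries in column b
            m_b = M[:, b, :].ravel(order="F")
            y, *_ = np.linalg.lstsq(A[obs], m_b[obs], rcond=None)
            Y[:, b, :] = y.reshape((r, k), order="F")
        return Y

    def tubal_altmin(M, mask, r, iters=25, seed=0):
        """Alternate tensor least squares for the two factors of M ~ X * Y.
        NOTE: random initialization for brevity; the paper's algorithm uses
        a more careful initialization and additional smoothing."""
        n1, n2, k = M.shape
        X = np.random.default_rng(seed).standard_normal((n1, r, k))
        for _ in range(iters):
            Y = ls_right_factor(M, mask, X)
            # Update X via the transposed problem: M^T ~ Y^T * X^T.
            X = ttrans(ls_right_factor(ttrans(M), ttrans(mask), ttrans(Y)))
        return X, Y

    # Toy usage: recover a tubal-rank-3 tensor from ~50% of its entries.
    rng = np.random.default_rng(1)
    n, r, k = 30, 3, 8
    M = tprod(rng.standard_normal((n, r, k)), rng.standard_normal((r, n, k)))
    mask = rng.random(M.shape) < 0.5
    X, Y = tubal_altmin(M, mask, r)
    print(np.linalg.norm(tprod(X, Y) - M) / np.linalg.norm(M))  # small -> recovered

The spatial-domain least squares in ls_right_factor reflects the abstract's point that circular convolution is intertwined with sub-sampling: because observations are taken entrywise in the original domain, the per-frequency decoupling that makes the unobserved t-product cheap no longer applies, and each column's tubes must be solved for jointly.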

Details

Language :
English
ISSN :
0018-9448
Volume :
66
Issue :
3
Database :
Academic Search Index
Journal :
IEEE Transactions on Information Theory
Publication Type :
Academic Journal
Accession number :
143313080
Full Text :
https://doi.org/10.1109/TIT.2019.2959980