TMac: Tensor Completion by Parallel Matrix Factorization

Yangyang Xu, Ruru Hao, Wotao Yin, and Zhixun Su


Higher-order low-rank tensors with missing values arise naturally in many applications, including hyperspectral data recovery, video inpainting, and seismic data reconstruction. These problems can be formulated as low-rank tensor completion (LRTC). Existing methods [1, 2] for LRTC employ matrix nuclear-norm minimization and rely on the singular value decomposition (SVD), which becomes very slow or even inapplicable for large-scale problems. To overcome this difficulty, we apply low-rank matrix factorization to each mode unfolding of the tensor to enforce low-rankness, and we update the matrix factors alternately, which is computationally much cheaper than SVD.

Our method

We aim to recover a low-rank tensor $\mathcal{M}\in\mathbf{R}^{I_1\times\cdots\times I_N}$ from partial observations $\mathcal{B}=\mathcal{P}_\Omega(\mathcal{M})$, where $\Omega$ is the index set of observed entries and $\mathcal{P}_\Omega$ keeps the entries in $\Omega$ and zeros out the others. We apply low-rank matrix factorization to each mode unfolding of $\mathcal{M}$ by finding matrices $\mathbf{X}_n\in\mathbf{R}^{I_n\times r_n}$, $\mathbf{Y}_n\in\mathbf{R}^{r_n\times\Pi_{j\neq n}I_j}$ such that $\mathbf{M}_{(n)}\approx\mathbf{X}_n\mathbf{Y}_n$ for $n=1,\ldots,N$, where $r_n$ is the estimated rank, either fixed or adaptively updated. Introducing one common variable $\mathcal{Z}$ to relate these matrix factorizations, we solve the following model
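To make the mode unfoldings concrete, here is a minimal numpy sketch (the `unfold` helper is our own illustration, not part of the released MATLAB code): each mode-$n$ unfolding of a tensor with low Tucker rank has rank at most $r_n$, which is exactly what the factorization $\mathbf{X}_n\mathbf{Y}_n$ exploits.

```python
import numpy as np

def unfold(T, n):
    """Mode-n unfolding M_(n): rows indexed by mode n, columns by all other modes."""
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1))

# A random tensor with Tucker rank (2, 2, 2): every mode unfolding then has
# rank <= 2, so M_(n) admits a factorization X_n Y_n with r_n = 2.
rng = np.random.default_rng(0)
C = rng.standard_normal((2, 2, 2))                   # core tensor
A = [rng.standard_normal((5, 2)) for _ in range(3)]  # factor matrices
M = np.einsum('abc,ia,jb,kc->ijk', C, A[0], A[1], A[2])
for n in range(3):
    print(n, unfold(M, n).shape, np.linalg.matrix_rank(unfold(M, n)))
```

Each unfolding here is a $5\times 25$ matrix of rank 2, so storing two thin factors per mode is far cheaper than repeatedly computing SVDs of the unfoldings.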

$$\mathop{\mathrm{minimize}}_{\mathbf{X},\mathbf{Y},\mathcal{Z}}\ \sum_{n=1}^N\frac{\alpha_n}{2}\|\mathbf{X}_n\mathbf{Y}_n-\mathbf{Z}_{(n)}\|_F^2,\quad \mathrm{s.t.}\ \mathcal{P}_\Omega(\mathcal{Z})=\mathcal{B},$$

where $\mathbf{X}=(\mathbf{X}_1,\ldots,\mathbf{X}_N)$ and $\mathbf{Y}=(\mathbf{Y}_1,\ldots,\mathbf{Y}_N)$. In the model, $\alpha_n$, $n=1,\ldots,N$, are weights satisfying $\sum_n\alpha_n=1$. We solve the model by an alternating least squares method.
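The alternating scheme can be sketched in a few lines of numpy. This is a simplified illustration under our own naming (`unfold`, `fold`, `tmac`) and with plain pseudoinverse-based least squares updates, not the authors' released MATLAB code, which uses cheaper factor updates and the rank-adjustment strategies described below: each pass updates $\mathbf{X}_n$ and $\mathbf{Y}_n$ by least squares against $\mathbf{Z}_{(n)}$, then refreshes $\mathcal{Z}$ as the $\alpha$-weighted average of the folded products while enforcing $\mathcal{P}_\Omega(\mathcal{Z})=\mathcal{B}$.

```python
import numpy as np

def unfold(T, n):
    """Mode-n unfolding M_(n): rows indexed by mode n, columns by all other modes."""
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1))

def fold(M, n, shape):
    """Inverse of unfold: rebuild a tensor of the given shape from M_(n)."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(np.reshape(M, full), 0, n)

def tmac(B, mask, ranks, alpha=None, iters=300):
    """Illustrative alternating least squares for the weighted model above.

    B     : observed tensor (zeros at unobserved entries)
    mask  : boolean tensor, True on the observed index set Omega
    ranks : estimated rank r_n per mode (fixed here, unlike TMac-inc/dec)
    """
    shape, N = B.shape, B.ndim
    if alpha is None:
        alpha = np.full(N, 1.0 / N)          # default weights sum to 1
    Z = B.copy()
    X = [np.random.randn(shape[n], ranks[n]) for n in range(N)]
    Y = [np.random.randn(ranks[n], Z.size // shape[n]) for n in range(N)]
    for _ in range(iters):
        Znew = np.zeros(shape)
        for n in range(N):
            Zn = unfold(Z, n)
            # Least-squares updates of the two factors for mode n
            X[n] = Zn @ np.linalg.pinv(Y[n])
            Y[n] = np.linalg.pinv(X[n]) @ Zn
            Znew += alpha[n] * fold(X[n] @ Y[n], n, shape)
        Z = Znew
        Z[mask] = B[mask]                    # enforce P_Omega(Z) = B
    return Z
```

On a small synthetic test, completing a $10\times10\times10$ tensor of Tucker rank $(2,2,2)$ from 70% of its entries with $r_n=2$, this sketch recovers the tensor to small relative error; per-iteration cost is dominated by the thin least squares solves rather than any SVD.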


Our model is non-convex jointly with respect to $\mathbf{X}$, $\mathbf{Y}$, and $\mathcal{Z}$. Although a global solution is not guaranteed, numerical experiments demonstrate that our algorithm reliably recovers a wide variety of low-rank tensors, as shown in the following phase transition plots. In the plots, each target tensor is $\mathcal{M}=\mathcal{C}\times_1\mathbf{A}_1\times_2\mathbf{A}_2\times_3\mathbf{A}_3$, where $\mathcal{C}\in\mathbf{R}^{r\times r\times r}$ and $\mathbf{A}_n\in\mathbf{R}^{50\times r}$, $\forall n$, have Gaussian random entries. (a) FaLRTC: the tensor completion method in [2]. (b) MatComp: first reshape the tensor into a matrix and then apply the matrix completion solver LMaFit in [3]. (c) TMac-fix: our method with $\alpha_n=\frac{1}{3}$ and $r_n$ fixed to $r$, $\forall n$. (d) TMac-inc: our method with $\alpha_n=\frac{1}{3}$ and a rank-increasing strategy starting from $r_n=\mathrm{round}(0.75r)$, $\forall n$. (e) TMac-dec: our method with $\alpha_n=\frac{1}{3}$ and a rank-decreasing strategy starting from $r_n=\mathrm{round}(1.25r)$, $\forall n$.

The results show that our method performs much better than the other two methods.


Matlab code

Download the code


Y. Xu, R. Hao, W. Yin, and Z. Su, Parallel matrix factorization for low-rank tensor completion, Inverse Problems and Imaging, 9 (2015), pp. 601-624.


[1] S. Gandy, B. Recht, and I. Yamada, Tensor completion and low-n-rank tensor recovery via convex optimization, Inverse Problems, 27(2011), p. 025010.

[2] J. Liu, P. Musialski, P. Wonka, and J. Ye, Tensor completion for estimating missing values in visual data, IEEE Transactions on Pattern Analysis and Machine Intelligence, (2013), pp. 208-220.

[3] Z. Wen, W. Yin, and Y. Zhang, Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm, Mathematical Programming Computation, (2012), pp. 1-29.