Abstract

Given a limited number of entries from the superposition of a low-rank matrix plus the product of a known compression matrix times a sparse matrix, recovery of the low-rank and sparse components is a fundamental task subsuming compressed sensing, matrix completion, and principal component pursuit. This paper develops algorithms for decentralized sparsity-regularized rank minimization over networks, when the nuclear and ℓ1 norms are used as surrogates for the rank and the number of nonzero entries of the sought matrices, respectively. While nuclear-norm minimization has well-documented merits when centralized processing is viable, the non-separability of the singular-value sum challenges its decentralized minimization. To overcome this limitation, an alternative characterization of the nuclear norm is leveraged to yield a separable, yet non-convex, cost that is minimized via the alternating-direction method of multipliers. Interestingly, if the decentralized (non-convex) estimator converges, under certain conditions it provably attains the global optimum of its centralized counterpart. As a result, this paper bridges the performance gap between centralized and in-network decentralized, sparsity-regularized rank minimization. This, in turn, facilitates (stable) recovery of the low-rank and sparse model matrices through reduced-complexity per-node computations, and affordable message passing among single-hop neighbors. Several application domains are outlined to highlight the generality and impact of the proposed framework. These include unveiling traffic anomalies in backbone networks, and predicting network-wide path latencies. Simulations with synthetic and real network data confirm the convergence of the novel decentralized algorithm, and its centralized performance guarantees.
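For context, the separable reformulation alluded to above typically rests on the following standard identity from the low-rank matrix factorization literature (a sketch with illustrative notation; the factor names P and Q and the rank bound ρ are assumptions, not symbols taken from the paper): for any matrix X ∈ ℝ^{L×T} with rank(X) ≤ ρ,

$$
\|X\|_{*} \;=\; \min_{\{P,\,Q\;:\;X \,=\, P Q^{\top}\}} \; \tfrac{1}{2}\left(\|P\|_F^{2} + \|Q\|_F^{2}\right),
$$

where the minimization is over factor pairs P ∈ ℝ^{L×ρ} and Q ∈ ℝ^{T×ρ}. Because both Frobenius norms decompose across the rows of P and Q, the resulting bilinear (hence non-convex) cost splits into per-node terms, which is what makes the alternating-direction method of multipliers amenable to in-network operation with only single-hop message exchanges.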

  • Publication date: 2013-11