Abstract

This paper is concerned with the problem of distributed multitask learning over networks, which aims to infer multiple node-specific parameter vectors simultaneously and in a collaborative manner. Most existing works on distributed multitask problems model task relatedness by explicitly assuming similarities among the parameter vectors. In this paper, we model this similarity implicitly by assuming that the parameter vectors share a common low-dimensional predictive structure on the hypothesis spaces, which is learned from the data available across the network. A distributed structure learning algorithm for this in-network cooperative estimation problem is then derived by integrating the block coordinate descent method with an inexact alternating direction method of multipliers (ADMM) technique. Simulations on both synthetic and real-world datasets verify the effectiveness of the proposed algorithm. When the nodes share a common predictive subspace, the proposed multitask algorithm is demonstrated to outperform the noncooperative learning algorithm. Moreover, the inexact approach significantly reduces the required communication bandwidth while still providing the same optimal solution as the corresponding centralized approach.
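To make the flavor of the algorithm concrete, the following is a minimal sketch of one plausible instantiation of the scheme described above, not the paper's exact formulation. It assumes a linear model at each node k with observations (X_k, y_k), a shared d-by-r subspace B, and node-specific low-dimensional coefficients v_k; the function name multitask_subspace_bcd, the step size, and the single-gradient-step B-update (the "inexact" ingredient) are illustrative assumptions.

```python
# Illustrative sketch: block coordinate descent over {v_k} and a shared
# subspace, with the subspace learned via inexact consensus ADMM.
# Assumed model: y_k ~= X_k @ B @ v_k for each node k (not from the paper).
import numpy as np

def multitask_subspace_bcd(X, y, r, lam=0.1, rho=1.0, outer=50, step=1e-3):
    K = len(X)                                         # number of nodes/tasks
    d = X[0].shape[1]
    rng = np.random.default_rng(0)
    Z = np.linalg.qr(rng.standard_normal((d, r)))[0]   # shared subspace (consensus variable)
    B = [Z.copy() for _ in range(K)]                   # local copies of the subspace
    U = [np.zeros((d, r)) for _ in range(K)]           # ADMM dual variables
    V = [np.zeros(r) for _ in range(K)]                # node-specific coefficients
    for _ in range(outer):
        for k in range(K):
            # v-step: exact ridge regression in the r-dimensional subspace
            A = X[k] @ Z
            V[k] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ y[k])
        for k in range(K):
            # inexact B-step: a single gradient step on the local
            # augmented Lagrangian instead of an exact minimization
            resid = X[k] @ B[k] @ V[k] - y[k]
            grad = X[k].T @ np.outer(resid, V[k]) + rho * (B[k] - Z + U[k])
            B[k] -= step * grad
        # consensus (averaging) and dual updates
        Z = np.mean([B[k] + U[k] for k in range(K)], axis=0)
        for k in range(K):
            U[k] += B[k] - Z
    return Z, V
```

In this sketch, nodes only need to exchange their d-by-r local subspace copies rather than raw data, and the cheap one-step B-update stands in for the communication-saving role that the inexact ADMM technique plays in the abstract's claim.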