Abstract

We consider distributed optimization problems in which a group of agents seek the global optimum of a sum of cost functions through only local information sharing. In this paper, we are particularly interested in scenarios where agents operate asynchronously over stochastic networks subject to random failures. Most existing algorithms require coordinated and decaying stepsizes to ensure a zero gap between each agent's estimate and the exact optimum, which precludes asynchronous implementation and slows convergence. To address this issue, we develop a new asynchronous distributed gradient method (AsynDGM) based on consensus theory. The proposed algorithm not only allows for asynchronous implementation in a completely distributed manner but also, most importantly, converges to the exact optimum even with constant stepsizes. We show that the assumption of bounded gradients, which is widely used in the literature, can be dropped and replaced by the standard Lipschitz continuity condition on gradients. Moreover, we derive an upper bound on the stepsize under which the proposed AsynDGM achieves a linear convergence rate for strongly convex functions with Lipschitz gradients. A canonical sensor fusion example is provided to illustrate the effectiveness of the proposed algorithm.
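To convey the flavor of consensus-based distributed gradient schemes with constant stepsizes, the sketch below runs a generic synchronous gradient-tracking update on a toy sensor-fusion (least-squares) problem. It is an illustrative assumption, not the paper's AsynDGM: the ring network, local costs, mixing weights, and stepsize are all made up for the example.

```python
# Minimal sketch (assumed setup, not the exact AsynDGM): synchronous
# consensus + gradient tracking with a constant stepsize on a toy
# sensor-fusion problem where agent i holds f_i(x) = 0.5*||A_i x - b_i||^2.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Local measurement models (illustrative data).
A = [rng.standard_normal((4, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(4) for _ in range(n_agents)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring graph (assumed topology).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

alpha = 0.02                                   # constant stepsize (assumed small enough)
x = np.zeros((n_agents, dim))                  # local estimates, one row per agent
y = np.array([grad(i, x[i]) for i in range(n_agents)])  # gradient trackers
g_old = y.copy()

for _ in range(2000):
    x = W @ x - alpha * y                      # consensus step + descent along tracked gradient
    g_new = np.array([grad(i, x[i]) for i in range(n_agents)])
    y = W @ y + g_new - g_old                  # track the average of local gradients
    g_old = g_new

# All agents should agree on the minimizer of the sum of local costs.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print(np.max(np.abs(x - x_star)))              # small residual: exact optimum reached
```

With a sufficiently small constant stepsize, the tracked-gradient term lets every agent converge to the minimizer of the global sum rather than to a stepsize-dependent neighborhood, which is the qualitative behavior the abstract attributes to AsynDGM.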