Abstract

We consider the multi-agent optimization problem in which multiple agents cooperatively optimize the sum of their local convex objective functions, subject to global inequality constraints and a convex constraint set, over a network. By characterizing the primal and dual optimal solutions as the saddle points of the associated Lagrangian function, which can be evaluated with stochastic errors, we propose distributed primal-dual stochastic subgradient algorithms for two cases: (i) a synchronous time model and (ii) an asynchronous time model. In the first case, we obtain bounds on the convergence properties of the algorithm for a diminishing step size. In the second case, we establish error bounds on the algorithm's performance for a constant step size. In particular, we prove that the error bounds scale as n√n in the number of agents n.
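To make the algorithmic idea concrete, the following is a minimal sketch of one synchronous round of a distributed primal-dual stochastic subgradient method, not the paper's exact algorithm. It assumes a doubly stochastic mixing matrix `W` matching the network, illustrative callables `grad_f`, `g`, `grad_g`, and `project_X` for the local subgradients, the global inequality constraint g(x) ≤ 0, and projection onto the constraint set X, with additive Gaussian noise standing in for the stochastic subgradient errors.

```python
import numpy as np

def primal_dual_step(X_iter, Mu_iter, W, grad_f, g, grad_g, project_X,
                     step, noise_std, rng):
    """One synchronous iteration for all n agents (illustrative sketch).

    X_iter:  (n, d) array of agent primal iterates.
    Mu_iter: (n, m) array of agent dual iterates (multipliers, >= 0).
    """
    n, d = X_iter.shape
    # Consensus step: each agent averages its neighbors' iterates
    # using the doubly stochastic weights W.
    X_mix = W @ X_iter
    Mu_mix = W @ Mu_iter
    X_new = np.empty_like(X_iter)
    Mu_new = np.empty_like(Mu_iter)
    for i in range(n):
        x, mu = X_mix[i], Mu_mix[i]
        # Noisy subgradient of the local Lagrangian
        # L_i(x, mu) = f_i(x) + mu^T g(x); the Gaussian term models
        # the stochastic evaluation error.
        gx = grad_f[i](x) + grad_g(x).T @ mu
        gx += rng.normal(scale=noise_std, size=d)
        # Primal descent with projection onto X; dual ascent with
        # projection onto the nonnegative orthant.
        X_new[i] = project_X(x - step * gx)
        Mu_new[i] = np.maximum(0.0, mu + step * g(x))
    return X_new, Mu_new
```

In the synchronous case described above, `step` would follow a diminishing schedule; in the asynchronous case, a constant `step` is used and only a randomly activated subset of agents would update each round.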

Full text