Abstract

This paper proposes distributed discrete-time algorithms for cooperatively solving an additive cost optimization problem over multiagent networks. Their striking feature is the use of only the sign of the relative state between neighbors, which substantially differentiates them from existing algorithms in the literature. We first interpret the proposed algorithms through the penalty method of optimization theory and then perform a nonasymptotic convergence analysis for static network graphs. Compared with the celebrated distributed subgradient algorithms, which require the exact relative state, the convergence speed is essentially unaffected by this loss of information. We also study how noise in the relative state and randomly activated graphs affect the performance of our algorithms. Finally, we validate the theoretical results on a class of distributed quantile regression problems.
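To make the key mechanism concrete, the following is a minimal sketch, not the paper's exact algorithm: each agent descends its local cost while nudging its state toward neighbors using only the sign of the relative state, which corresponds to a subgradient step on a penalized objective with an absolute-value disagreement term. The graph, local quadratic costs, penalty weight, and stepsize schedule below are all illustrative assumptions.

```python
import numpy as np

def sign_consensus_step(x, neighbors, grads, lam, alpha):
    """One synchronous update using only signs of relative states.

    x:         current agent states, shape (n,)
    neighbors: list of neighbor index lists (static undirected graph)
    grads:     per-agent callables giving a (sub)gradient of the local cost
    lam:       penalty weight on the disagreement term sum_{(i,j)} |x_i - x_j|
    alpha:     diminishing stepsize for this iteration
    """
    x_new = x.copy()
    for i, nbrs in enumerate(neighbors):
        # Only sgn(x_j - x_i) is used, never the exact relative state.
        attract = sum(np.sign(x[j] - x[i]) for j in nbrs)
        x_new[i] = x[i] + alpha * (lam * attract - grads[i](x[i]))
    return x_new

# Hypothetical instance: quadratic local costs f_i(x) = (x - b_i)^2 on a
# 4-agent ring, so the additive cost sum_i f_i is minimized at mean(b) = 1.5.
b = np.array([0.0, 1.0, 2.0, 3.0])
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
grads = [lambda x, bi=bi: 2.0 * (x - bi) for bi in b]

x = b.copy()
for k in range(20000):
    x = sign_consensus_step(x, neighbors, grads, lam=5.0,
                            alpha=0.1 / (k + 1) ** 0.75)

print(x)  # all agents close to the centralized minimizer 1.5
```

The diminishing stepsize tames the chattering that the sign nonlinearity would otherwise cause, and the penalty weight `lam` must dominate the local gradients at consensus for the penalized minimizer to coincide with the consensus optimum.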