Abstract

This paper investigates distributed cooperative learning (DCL) problems over networks, where each node has access only to its own data, uniformly sampled from an unknown pattern (map or function), and all nodes cooperatively learn the pattern by exchanging local information with their neighbors. Such problems cannot be solved by traditional centralized algorithms. To address them, two novel DCL algorithms based on wavelet neural networks are proposed: a continuous-time DCL (CT-DCL) algorithm and a discrete-time DCL (DT-DCL) algorithm. Combining the characteristics of neural networks with the properties of wavelet approximation, wavelet series are used to approximate the unknown pattern, and the DCL algorithms train the optimal weight coefficient matrix of the wavelet series. Convergence of the proposed algorithms is guaranteed via the Lyapunov method. Compared with existing distributed optimization strategies such as distributed average consensus (DAC) and the alternating direction method of multipliers (ADMM), the DT-DCL algorithm requires less communication and training time than ADMM, and it achieves higher accuracy than DAC when the network contains a large number of nodes. Moreover, with a proper step size, the CT-DCL algorithm is more accurate than the DT-DCL algorithm when training time is not a concern. Several illustrative examples demonstrate the efficiency and advantages of the proposed algorithms.
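The setting described above can be illustrated with a minimal numerical sketch. This is not the paper's CT-DCL or DT-DCL algorithm; it is a generic hypothetical discrete-time scheme in which each node fits a shared wavelet-series expansion to its own local samples of an unknown function, using a local gradient step plus a consensus step with its ring neighbors. The Mexican-hat wavelet, the ring topology, and all step sizes are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mexican-hat mother wavelet; dilated/translated copies form the basis
# (an illustrative choice, not the wavelet used in the paper).
def psi(x):
    return (1 - x**2) * np.exp(-x**2 / 2)

def features(x, scales, shifts):
    # Basis matrix Phi[i, j] = psi(scale_j * x_i - shift_j)
    return psi(np.outer(x, scales) - shifts)

# Unknown pattern; used only to generate each node's local data.
f = lambda x: np.sin(2 * np.pi * x)

scales = np.full(9, 4.0)
shifts = np.linspace(-2.0, 6.0, 9)   # covers scale*x for x in [0, 1]
n_nodes, n_samples = 4, 50

# Each node sees only its own uniformly sampled data.
X = [rng.uniform(0, 1, n_samples) for _ in range(n_nodes)]
Y = [f(x) for x in X]
Phi = [features(x, scales, shifts) for x in X]

# Ring topology: each node exchanges weights only with two neighbors.
neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes]
             for i in range(n_nodes)}

W = [np.zeros(9) for _ in range(n_nodes)]   # local weight vectors
eta, gamma = 0.05, 0.5                      # gradient / consensus gains

for _ in range(5000):
    W_new = []
    for i in range(n_nodes):
        grad = Phi[i].T @ (Phi[i] @ W[i] - Y[i]) / n_samples
        consensus = sum(W[j] - W[i] for j in neighbors[i])
        W_new.append(W[i] + gamma * consensus / len(neighbors[i])
                     - eta * grad)
    W = W_new

# The nodes' weights agree and jointly fit the unknown pattern.
x_test = np.linspace(0, 1, 100)
pred = features(x_test, scales, shifts) @ W[0]
err = np.max(np.abs(pred - f(x_test)))
print(f"max abs error at node 0: {err:.3f}")
```

Note the design point this sketch mirrors: no node ever transmits raw data, only its current weight vector, which is why such schemes are compared against DAC and ADMM on communication cost.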