Abstract

Conjugate gradient methods offer advantages such as fast convergence and low memory requirements, which are important in many real-life applications. For zero-order Takagi-Sugeno inference systems, a Polak-Ribière-Polyak conjugate gradient-based algorithm is proposed to train a neuro-fuzzy network. Compared with the existing gradient-based training algorithm, this scheme efficiently enhances learning performance. Two deterministic convergence results are proved in detail: the gradient of the objective function approaches zero (weak convergence), and the sequence of system parameters tends to a fixed point (strong convergence). To demonstrate the effectiveness of the proposed algorithm and to validate the theoretical results, numerical simulations are provided for both function approximation and classification problems.
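To make the core update concrete, the following is a minimal, generic sketch of Polak-Ribière-Polyak conjugate gradient minimization on a toy quadratic objective. It is not the paper's neuro-fuzzy training algorithm: the fixed step size `lr`, the PRP+ non-negativity clip on the coefficient, and the test problem are illustrative assumptions (in practice a line search would choose the step length).

```python
import numpy as np

def prp_cg_minimize(grad, x0, lr=0.1, iters=200):
    """Minimize a smooth objective with the Polak-Ribiere-Polyak
    conjugate gradient direction update.

    grad: callable returning the gradient at x
    lr:   fixed step size (an assumption; a line search is typical)
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                       # first direction: steepest descent
    for _ in range(iters):
        x = x + lr * d           # step along the search direction
        g_new = grad(x)
        # PRP coefficient: beta_k = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2
        beta = g_new @ (g_new - g) / max(g @ g, 1e-12)
        beta = max(beta, 0.0)    # PRP+ clip: restart when beta < 0
        d = -g_new + beta * d    # conjugate direction update
        g = g_new
    return x

# Toy quadratic f(x) = 0.5 x^T A x - b^T x with gradient A x - b;
# the minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = prp_cg_minimize(lambda x: A @ x - b, np.zeros(2))
```

Under exact line search on a quadratic, this recursion reproduces linear conjugate gradient; with the fixed step used here it still drives the gradient norm toward zero on this small, well-conditioned problem, mirroring the weak-convergence notion described in the abstract.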