Abstract

Artificial neural networks (ANNs) are often trained using gradient descent algorithms (such as backpropagation). An important problem in the learning process is the slowdown incurred by temporary minima (TM). We analyze this problem for an ANN trained to solve the Exclusive Or (XOR) problem. After each learning step, the network is transformed into the equivalent all-permutations fuzzy rule-base (FARB), which provides a symbolic representation of the knowledge embedded in the network. We develop a mathematical model for the evolution of the fuzzy rule-base parameters during learning in the vicinity of TM. We show that the rule-base becomes singular and tends to remain singular in the vicinity of TM. The analysis of the fuzzy rule-base suggests a simple remedy for overcoming the slowdown in the learning process incurred by TM. This is based on slightly perturbing the desired output values in the training examples, so that they are no longer symmetric. Simulations demonstrate the effectiveness of this approach in reducing the time spent in the vicinity of TM.
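
The sketch below illustrates the general idea of the proposed remedy: training the same small network on XOR once with the usual symmetric targets and once with slightly perturbed (asymmetric) desired outputs. It is a minimal illustration only; the network size (2-2-1), learning rate, number of epochs, and the specific perturbation values are assumptions for demonstration and are not taken from the paper.

```python
# Minimal sketch, assuming a 2-2-1 sigmoid network trained by plain batch
# gradient descent on squared error. The perturbation values are arbitrary
# illustrative choices, not the values used in the paper.
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y_symmetric = np.array([[0.], [1.], [1.], [0.]])
# Slightly perturb the desired outputs so they are no longer symmetric.
y_perturbed = y_symmetric + np.array([[0.03], [-0.02], [0.04], [-0.01]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(y, epochs=20000, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)           # same init for both runs
    W1, b1 = rng.normal(scale=0.5, size=(2, 2)), np.zeros((1, 2))
    W2, b2 = rng.normal(scale=0.5, size=(2, 1)), np.zeros((1, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                # hidden layer activations
        out = sigmoid(h @ W2 + b2)              # network output
        err = out - y                           # gradient of 0.5*(out - y)^2
        # Backpropagate through the sigmoid nonlinearities.
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0, keepdims=True)
    return out

print("symmetric targets :", train(y_symmetric).ravel())
print("perturbed targets :", train(y_perturbed).ravel())
```

In practice one would also log the training error over epochs for the two target sets to compare how long each run lingers near a temporary minimum, which is the effect the paper's simulations examine.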

  • Publication date: 2010-10-01