Abstract

In incremental learning techniques, learning occurs continuously over time and does not cease once the available data have been exhausted. Such techniques are useful when problem data arrive in small quantities over time. This paper presents an incremental neural network called the evolving Probabilistic Neural Network. The main advantage of this technique lies in its adaptive architecture, which adjusts to the data distribution. Each training sample is used only once throughout the training phase and is never reprocessed. The technique is flexible and offers a simplified structure while maintaining performance comparable to that of other techniques. Experiments were conducted on publicly available benchmark data sets. They show that, overall, the proposed model achieves a quality of response comparable to that of the best techniques evaluated, while its structure size and classification time are as low as those of less complex techniques. These results indicate that the proposed model strikes a satisfactory balance between efficiency and efficacy.
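
The abstract does not give the paper's update equations, but the single-pass, structure-growing behavior it describes can be illustrated with a minimal sketch. The class name `EvolvingPNNSketch`, the fixed distance threshold, the shared kernel width, and the running-mean node update below are all illustrative assumptions, not the authors' actual ePNN rules.

```python
import numpy as np

class EvolvingPNNSketch:
    """Single-pass, kernel-based incremental classifier in the spirit of an
    evolving Probabilistic Neural Network. The node-creation rule and the
    fixed parameters are assumptions made for illustration only."""

    def __init__(self, threshold=1.0, sigma=0.5):
        self.threshold = threshold  # distance beyond which a new node is spawned (assumed)
        self.sigma = sigma          # Gaussian kernel width, shared by all nodes (assumed)
        self.centers = []           # one prototype center per node
        self.counts = []            # samples absorbed by each node
        self.labels = []            # class label of each node

    def partial_fit(self, x, y):
        """Present one sample exactly once: update the nearest same-class
        node if it is close enough, otherwise grow the architecture."""
        x = np.asarray(x, dtype=float)
        best, best_d = None, np.inf
        for i, (c, lbl) in enumerate(zip(self.centers, self.labels)):
            if lbl == y:
                d = np.linalg.norm(x - c)
                if d < best_d:
                    best, best_d = i, d
        if best is not None and best_d <= self.threshold:
            # incremental mean update; the sample is never stored or revisited
            self.counts[best] += 1
            self.centers[best] += (x - self.centers[best]) / self.counts[best]
        else:
            # architecture adapts: a new node covers this region of input space
            self.centers.append(x.copy())
            self.counts.append(1)
            self.labels.append(y)

    def predict(self, x):
        """Parzen-style score: sum Gaussian kernel activations per class."""
        x = np.asarray(x, dtype=float)
        scores = {}
        for c, n, lbl in zip(self.centers, self.counts, self.labels):
            k = n * np.exp(-np.sum((x - c) ** 2) / (2.0 * self.sigma ** 2))
            scores[lbl] = scores.get(lbl, 0.0) + k
        return max(scores, key=scores.get) if scores else None
```

In this sketch, `partial_fit(x, y)` is called once per arriving sample and `predict(x)` can be queried at any time; the number of kernel nodes grows only where the threshold test fails, which captures the sense in which an adaptive architecture adjusts to the data distribution.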

  • Publication date: 2015-2-10