Abstract

In classical feedforward neural networks such as the multilayer perceptron, the radial basis function network, or the counter-propagation network, the neurons in the input layer correspond to the features of the training patterns. The number of these features may be large, and their relevance may vary. Therefore, the selection of appropriate input neurons should be considered. The aim of this paper is to present a complete step-by-step algorithm for determining the significance of particular input neurons of the probabilistic neural network (PNN). It is based on a sensitivity analysis procedure applied to a trained PNN. The proposed algorithm is used to reduce the input layer of the considered network by removing the indicated features from the data set. For comparison purposes, the significance of the PNN's input neurons is also established with the ReliefF and variable importance procedures, which assess the relevance of the input features in the data set. The performance of the reduced PNN is verified against the full-structure network in classification problems using real benchmark data sets from an available machine learning repository. The achieved results are also compared with those attained by entropy-based algorithms. The prediction ability, expressed in terms of misclassifications, is obtained by means of a 10-fold cross-validation procedure. The obtained outcomes point out interesting properties of the proposed algorithm: it is shown that the efficiency determined by all tested reduction methods is comparable.
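To make the idea concrete, below is a minimal sketch of sensitivity-based input-neuron ranking for a Gaussian-kernel PNN. The kernel width `sigma`, the ranking rule (mean absolute derivative of the winning class output with respect to each feature), and the toy data set are illustrative assumptions, not the paper's exact step-by-step algorithm.

```python
# Hedged sketch: PNN classifier with a sensitivity-based feature ranking.
# Assumptions (not taken from the paper): a single shared smoothing
# parameter `sigma` and ranking by the mean absolute output derivative.
import numpy as np

def pnn_class_outputs(x, X_train, y_train, sigma=0.5):
    """Summation-layer output of a Gaussian-kernel PNN for each class."""
    classes = np.unique(y_train)
    outputs = np.empty(len(classes))
    for i, c in enumerate(classes):
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)            # squared distances
        outputs[i] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return classes, outputs

def sensitivity_ranking(X_train, y_train, sigma=0.5):
    """Rank input neurons by the mean absolute derivative of the
    winning class output with respect to each input feature."""
    n, p = X_train.shape
    S = np.zeros(p)
    for x in X_train:
        classes, out = pnn_class_outputs(x, X_train, y_train, sigma)
        k = classes[np.argmax(out)]                   # winning class
        Xk = X_train[y_train == k]
        d2 = np.sum((Xk - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # kernel weights
        # d/dx_j of the kernel mean: weighted feature offsets / sigma^2
        grad = (w[:, None] * (Xk - x)).mean(axis=0) / sigma ** 2
        S += np.abs(grad)
    return np.argsort(S / n)[::-1]                    # most relevant first

# Usage on a toy data set: keep only the top-ranked input neurons.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)         # features 2, 3 are noise
order = sensitivity_ranking(X, y)
print("features ranked by sensitivity:", order)
X_reduced = X[:, order[:2]]                           # reduced input layer
```

Reducing the input layer then amounts to retraining (or, for a PNN, simply re-instantiating) the network on `X_reduced`, mirroring the feature-removal step described in the abstract.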

  • Publication date: 2018-08