Abstract

In a conventional SOM, it is crucial to choose a suitable, consistently decreasing learning-rate function. If the learning rate decreases too quickly, the map fails to converge and the SOM's performance degrades sharply; if it decreases too slowly, training takes an excessive amount of time. To overcome this problem, we propose a constant learning rate self-organizing map (CLRSOM) learning algorithm, which uses a fixed learning rate throughout training. The model intelligently selects both the nearest and the farthest neuron from the Best Matching Unit (BMU). Despite the constant learning rate, this SOM still yields markedly better results. The CLRSOM is applied to several standard input datasets, and a substantial improvement in learning performance, measured by three standard parameters, is reported in comparison with the conventional SOM and the Rival Penalized SOM (RPSOM). The resulting mapping preserves the topology of the input data without sacrificing quantization error or neuron-utilization levels.
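To make the constant-learning-rate idea concrete, the sketch below trains a minimal SOM in which the learning rate never decays across epochs. This is an illustrative baseline, not the authors' CLRSOM: the nearest/farthest-neuron selection around the BMU described in the abstract is replaced here by a plain Gaussian neighborhood, and all parameter names and values are assumptions for demonstration.

```python
import numpy as np

def train_constant_lr_som(data, grid_shape=(5, 5), lr=0.1, sigma=1.0,
                          epochs=20, seed=0):
    """Minimal SOM trained with a CONSTANT learning rate.

    Illustrative sketch only: uses a standard Gaussian neighborhood
    around the BMU instead of the CLRSOM's nearest/farthest-neuron rule.
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    n_features = data.shape[1]
    weights = rng.random((rows * cols, n_features))
    # Grid coordinates of each neuron, used by the neighborhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)],
                      dtype=float)

    for _ in range(epochs):
        for x in data:
            # Best Matching Unit: neuron whose weight vector is closest to x.
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighborhood on the grid, centered at the BMU.
            grid_dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))
            # Constant learning rate: lr is NOT decayed between epochs.
            weights += lr * h[:, None] * (x - weights)
    return weights

def quantization_error(data, weights):
    """Mean distance from each sample to its BMU's weight vector."""
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

With a decaying schedule, the abstract's trade-off appears directly: decay too fast and the map freezes before it converges; too slow and training drags on. The sketch sidesteps the schedule entirely, which is the property the quantization-error metric above can be used to evaluate.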

  • Publication date: 2015-3