Abstract

The value of radial basis function (RBF) networks has been fully demonstrated, and their application across a wide range of scientific fields is undisputed. A fundamental aspect of this tool is the training process, which determines both the efficiency (success or "hit rate" in the subsequent classification) and the overall performance (runtime), since the RBF training phase is the most expensive phase in terms of time. There is abundant literature on improving these aspects, in which the proposed training techniques are classified either as iterative techniques, with very short execution times for the training process, or as traditional exact techniques, which excel in their high classification accuracy. Our field of study requires the smallest possible classification error, and for this reason our research opts for exact techniques, while also working to reduce the high latencies of the training process. In a previous study, we proposed a pseudo-exact technique that improved the training process by an average of 99.1638177% using an RBF-SOM architecture. In the present study we exploit one characteristic of this architecture, namely the possibility of parallelizing the training process. Accordingly, this article proposes an RBF-SOM structure that, thanks to CUDA, parallelizes the training process; we denote this the CUDA-RBF-SOM architecture.
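As an illustrative sketch only (not the authors' implementation), the core computation that makes RBF training amenable to CUDA-style parallelization is the Gaussian hidden-layer activation: each (sample, center) pair can be evaluated independently, so one GPU thread can be assigned to each pair. The function name `rbf_activations` and the use of a single shared width `sigma` are assumptions for the example; the vectorized NumPy expression below mirrors that data-parallel pattern on the CPU.

```python
import numpy as np

def rbf_activations(X, centers, sigma):
    """Gaussian RBF hidden-layer activations (illustrative sketch).

    X:       (n_samples, n_features) input matrix.
    centers: (n_centers, n_features) RBF centers (e.g. taken from SOM nodes).
    Returns: (n_samples, n_centers) matrix with entries
             exp(-||x_i - c_j||^2 / (2 * sigma^2)).
    """
    # Pairwise squared Euclidean distances in one vectorized step --
    # each (i, j) entry is independent, which is exactly what a CUDA
    # kernel exploits by mapping one thread to one (sample, center) pair.
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
centers = np.array([[0.0, 0.0], [0.0, 1.0]])
A = rbf_activations(X, centers, sigma=1.0)
```

With the activation matrix in hand, the exact techniques the abstract refers to typically solve for the output weights directly (e.g. via a least-squares system built from these activations), which is where the bulk of the training time is spent.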

  • Publication date: 2016-09