Abstract
Pattern matching with high computational cost is often realized in hardware by dedicated implementations, which usually offer low flexibility for satisfying different target applications. The conventional single-instruction-multiple-data (SIMD) approach, which fully exploits the massive intrinsic parallelism, is therefore attracting increasing attention for artificial neural networks. In this paper, as an alternative to SIMD, we propose a memory-based reconfigurable architecture for implementing the self-organizing-map (SOM) neural network model and its supervised variant, learning vector quantization (LVQ). A reconfigurable complete-binary-adder-tree (RCBAT) architecture, which allows multiple operation modes through self-managed dynamic configuration, attains good area/power efficiency through the reuse of arithmetic units. The implemented pipelined parallelism leads to higher throughput and processing speed. Furthermore, a partial vector-component storage (PVCS) concept enables high flexibility in both the dimensionality and the number of feature vectors. The prototype chips fabricated in 65 nm and 180 nm CMOS technology achieve 51.2 Gbit/s and 9.6 Gbit/s throughput, respectively. The experimental results verify fast training and recognition speed, low power dissipation, and large flexibility across different applications.
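For readers unfamiliar with the algorithms this architecture accelerates, the following is a minimal software sketch of the standard SOM and LVQ1 update rules (nearest-reference-vector search plus weight adaptation), not the paper's hardware design. The function names, the 1-D neighborhood, and the learning-rate values are illustrative assumptions.

```python
import numpy as np

def best_matching_unit(weights, x):
    # Nearest-neighbor search: index of the reference vector closest
    # to input x under Euclidean distance (the core pattern-matching
    # operation in both SOM and LVQ).
    d = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(d))

def som_update(weights, x, lr=0.1, sigma=1.0):
    # SOM rule: pull the winner and its neighbors toward x, weighted
    # by a Gaussian neighborhood over an assumed 1-D neuron index.
    bmu = best_matching_unit(weights, x)
    idx = np.arange(len(weights))
    h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))
    return weights + lr * h[:, None] * (x - weights)

def lvq1_update(weights, labels, x, y, lr=0.1):
    # LVQ1 rule (supervised): move the winner toward x when its class
    # label matches y, otherwise push it away.
    w = weights.copy()
    bmu = best_matching_unit(w, x)
    sign = 1.0 if labels[bmu] == y else -1.0
    w[bmu] += sign * lr * (x - w[bmu])
    return w
```

In the proposed hardware, the distance computation and winner search performed here in software would map onto the reconfigurable adder tree, with feature vectors distributed across memory according to the PVCS scheme.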
- Publication date: 2016-10