Abstract

The storage model of column-oriented databases is similar in structure to the densely packed matrices and vectors found in many high-performance computing applications. Hence, hardware-accelerated vectorized matrix operations using Reconfigurable Logic (RL) coprocessors may find parallels in the hardware acceleration of databases. In this article, we explore this hypothesis by proposing a multicontext, coarse-grained Reconfigurable coprocessor Unit (RU) model that is used to accelerate selected database operations for column-oriented databases in hardware. We then describe the implementation of hardware algorithms for the equi-join, nonequi-join, and inverse-lookup database operations. Finally, we evaluate these algorithms using a microbenchmark query. Our results indicate that query execution on the proposed RU model is one to two orders of magnitude faster than software-only query execution.
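For intuition only, the sketch below shows a conventional software equi-join over two column-oriented key columns, implemented as a standard hash join; this is the kind of software baseline the RU hardware algorithms would be compared against, not the paper's RU implementation itself. The column names (customers_id, orders_cust_id) and data are hypothetical.

```cpp
// Illustrative software baseline (hash equi-join over columnar key arrays).
// Not the paper's RU hardware algorithm; column names are hypothetical.
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <utility>
#include <vector>

// Returns pairs of (row index in left column, row index in right column)
// whose key values are equal -- the position lists a column store joins on.
std::vector<std::pair<size_t, size_t>>
equi_join(const std::vector<int32_t>& left, const std::vector<int32_t>& right) {
    // Build phase: map each key in the left column to its row positions.
    std::unordered_multimap<int32_t, size_t> build;
    for (size_t i = 0; i < left.size(); ++i)
        build.emplace(left[i], i);

    // Probe phase: scan the right column and emit matching position pairs.
    std::vector<std::pair<size_t, size_t>> matches;
    for (size_t j = 0; j < right.size(); ++j) {
        auto range = build.equal_range(right[j]);
        for (auto it = range.first; it != range.second; ++it)
            matches.emplace_back(it->second, j);
    }
    return matches;
}

int main() {
    // Hypothetical key columns: customers_id joins to orders_cust_id.
    std::vector<int32_t> customers_id  = {10, 20, 30, 40};
    std::vector<int32_t> orders_cust_id = {20, 20, 40, 50};

    for (auto [l, r] : equi_join(customers_id, orders_cust_id))
        std::printf("customer row %zu <-> order row %zu\n", l, r);
    return 0;
}
```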

  • Publication date: 2011-05
