Abstract

The practical application of hydrological uncertainty models that are designed to generate multiple ensembles can be severely restricted by the available computer processing power and thus, the time taken to generate the results. CPU clusters can help in this regard, but are often costly to use continuously and maintain, causing scientists to look elsewhere for speed improvements. The use of powerful graphics processing units (GPUs) for application acceleration has become a recent trend, owing to their low cost per FLOP, and their highly parallel and throughput-oriented architecture, which makes them ideal for many scientific applications. However, programming these devices efficiently is non-trivial, seemingly making their use impractical for many researchers. In this study, we investigate whether redesigning the CPU code of an adapted Pitman rainfall-runoff uncertainty model is necessary to obtain a satisfactory speedup on GPU devices. A twelvefold speedup over a multithreaded CPU implementation was achieved by using a modern GPU with minimal changes to the model code. This success leads us to believe that redesigning code for the GPU is not always necessary to obtain a worthwhile speedup.
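To illustrate the kind of parallelism the abstract alludes to, the sketch below maps each independent ensemble member to one GPU thread. This is a minimal, hypothetical CUDA example and not the authors' adapted Pitman model: the `Params` fields, the placeholder `step_runoff` store, and all sizes are illustrative assumptions only.

```cuda
// Hypothetical sketch: one ensemble member per GPU thread.
// "step_runoff" is a placeholder single-store water-balance step,
// not the actual Pitman rainfall-runoff equations.
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative parameter set for one ensemble member (assumed fields).
struct Params { float storage_max; float recession; };

// Placeholder runoff step: fill the store with rain, spill the excess.
__device__ float step_runoff(float& storage, float rain, const Params& p) {
    storage += rain;
    float runoff = p.recession * fmaxf(storage - p.storage_max, 0.0f);
    storage -= runoff;
    return runoff;
}

// Each thread simulates one ensemble member over the whole record, so the
// ensemble dimension maps directly onto the GPU's thread-level parallelism.
__global__ void run_ensemble(const Params* params, const float* rain,
                             int n_steps, float* total_runoff, int n_members) {
    int m = blockIdx.x * blockDim.x + threadIdx.x;
    if (m >= n_members) return;
    float storage = 0.0f, total = 0.0f;
    for (int t = 0; t < n_steps; ++t)
        total += step_runoff(storage, rain[t], params[m]);
    total_runoff[m] = total;
}

int main() {
    const int n_members = 1 << 14, n_steps = 365;  // illustrative sizes
    Params* d_params; float *d_rain, *d_out;
    cudaMalloc(&d_params, n_members * sizeof(Params));
    cudaMalloc(&d_rain, n_steps * sizeof(float));
    cudaMalloc(&d_out, n_members * sizeof(float));
    // Parameter sampling and rainfall input are omitted; buffers left zeroed.
    cudaMemset(d_params, 0, n_members * sizeof(Params));
    cudaMemset(d_rain, 0, n_steps * sizeof(float));
    run_ensemble<<<(n_members + 255) / 256, 256>>>(d_params, d_rain,
                                                   n_steps, d_out, n_members);
    cudaDeviceSynchronize();
    printf("simulated %d ensemble members\n", n_members);
    cudaFree(d_params); cudaFree(d_rain); cudaFree(d_out);
    return 0;
}
```

Because each ensemble member is independent, a kernel of this shape requires little restructuring of existing per-member model code, which is consistent with the abstract's claim that a worthwhile speedup can be obtained with minimal changes.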

  • Publication date: 2014-1