Abstract

Scientific computing on grid infrastructures has historically focused on processing vast workloads of independent single-core CPU jobs. Limitations of this approach, however, have motivated a shift towards parallel computing using message passing, multi-core CPUs and computational accelerators, GPGPUs in particular. Application support for GPGPUs in existing grid infrastructures is still lacking. A model is proposed for the orchestration of GPGPU-enabled applications based on commonly used frameworks such as CUDA and OpenCL. The model makes use of recent advances in remote GPGPU virtualisation, which allow an application to access GPGPUs installed on remote hosts. Each physical GPGPU is isolated, creating a pool of virtual GPGPUs (vGPGPUs) that can be allocated independently to jobs. A proof-of-concept grid supporting vGPGPUs has been implemented and tested. It will be shown that users can be provided with a simple yet flexible and powerful mechanism for specifying GPGPU requirements. Furthermore, vGPGPU provision can be fully integrated with existing grid middleware and services. Performance results suggest that improved resource utilisation can compensate for the overhead of remote GPGPU access.

  • Publication date: 2015
