GPUShare: Fair-Sharing Middleware for GPU Clouds

2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)

Abstract
Many new cloud-focused applications such as deep learning and graph analytics have started to rely on the high computing throughput of GPUs, but cloud providers cannot currently support fine-grained time-sharing on GPUs to enable multi-tenancy for these types of applications. Currently, scheduling is performed by the GPU driver in combination with a hardware thread dispatcher to maximize utilization. However, when multiple applications with contrasting kernel running times and high GPU utilization must be co-located, this approach unduly favors some applications at the expense of others. This paper presents GPUShare, a middleware solution for fair GPU sharing among high-utilization, long-running applications. It begins by analyzing the scenarios under which current driver-based multi-process scheduling fails, noting that such scenarios are quite common. It then describes a software-based mechanism that can yield a kernel before all of its threads have run, giving finer control over the time slice for which the GPU is allocated to a process. By controlling time slices on the GPU through kernel yielding, GPUShare improves fair GPU sharing across tenants, outperforming the CUDA driver by up to 45% for two tenants and by up to 89% for more than two tenants, while incurring a maximum overhead of only 12%. Additional improvements come from a central scheduler that further smooths out disparities across tenants' GPU shares, improving fair sharing by up to 92% for two tenants and by up to 76% for more than two tenants.
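The yielding mechanism the abstract describes can be illustrated with a short CUDA sketch: a kernel whose threads claim work items from a global counter that persists across launches, polling a host-set flag between items so a middleware scheduler can stop the kernel before all of its threads have run and resume it later. All identifiers here (yieldable_kernel, work_pos, yield_flag) are illustrative assumptions, not GPUShare's actual implementation.

```cuda
// Minimal sketch (assumed names throughout; not GPUShare's actual API).
// Each thread claims work from a global counter and checks a host-set
// flag between items, so a scheduler can preempt the kernel early and
// relaunch it to resume from the persisted counter.
#include <cuda_runtime.h>

__global__ void yieldable_kernel(float *data, int n,
                                 int *work_pos,           // persists across launches
                                 volatile int *yield_flag) {
    while (true) {
        if (*yield_flag) return;         // middleware requested a yield: exit early
        int i = atomicAdd(work_pos, 1);  // claim the next unprocessed item
        if (i >= n) return;              // all items claimed: kernel is finished
        data[i] *= 2.0f;                 // placeholder per-item work
    }
}

int main() {
    const int n = 1 << 20;
    float *data;    cudaMalloc(&data, n * sizeof(float));
    int *work_pos;  cudaMalloc(&work_pos, sizeof(int));
    cudaMemset(work_pos, 0, sizeof(int));

    // Host-writable, device-readable flag: the scheduler sets it to bound
    // how long each launch occupies the GPU.
    int *h_flag, *d_flag;
    cudaHostAlloc(&h_flag, sizeof(int), cudaHostAllocMapped);
    cudaHostGetDevicePointer(&d_flag, h_flag, 0);
    *h_flag = 0;

    // A real scheduler would set *h_flag after a time slice and relaunch
    // until work_pos reaches n; one launch is shown here for brevity.
    yieldable_kernel<<<64, 256>>>(data, n, work_pos, d_flag);
    cudaDeviceSynchronize();

    cudaFree(data); cudaFree(work_pos); cudaFreeHost(h_flag);
    return 0;
}
```

Because the work counter lives in device memory, any items left unclaimed when the flag is raised are picked up automatically on the next launch, which is one plausible way to realize the paper's "yield a kernel before all of its threads have run" without hardware preemption support.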
Key words
GPU, multi-tenancy, middleware, instrumentation, yielding