Serving Multi-DNN Workloads on FPGAs: A Coordinated Architecture, Scheduling, and Mapping Perspective

IEEE Transactions on Computers (2023)

Abstract
Deep Neural Network (DNN) INFerence-as-a-Service (INFaaS) is a dominant workload in current data centers, for which FPGAs have become promising hardware platforms because of their high flexibility and energy efficiency. The dynamic, multi-tenant nature of INFaaS requires careful design in three aspects: multi-tenant architecture, multi-DNN scheduling, and multi-core mapping. These three factors are critical to system latency and energy efficiency, but they are also challenging to optimize because they are tightly coupled. This paper proposes H3M, an automatic Design Space Exploration (DSE) framework that jointly optimizes the architecture, scheduling, and mapping for serving INFaaS on cloud FPGAs. H3M explores: (1) the architecture design space with Heterogeneous spatial Multi-tenant sub-accelerators, (2) layer-wise scheduling for Heterogeneous Multi-DNN workloads, and (3) single-layer mapping onto the Homogeneous Multi-core architecture. H3M beats the state-of-the-art multi-tenant DNN accelerators Planaria and Herald by up to 7.5x and 3.6x in Energy-Delay-Product (EDP) reduction on the ASIC platform. On the Xilinx U200 and U280 FPGA platforms, H3M offers 2.1-5.7x and 1.8-9.0x EDP reduction over Herald.
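The Energy-Delay-Product used as the comparison metric above can be illustrated with a short sketch. The numbers below are hypothetical and for illustration only; they are not measurements from the paper.

```python
def edp(energy_joules: float, delay_seconds: float) -> float:
    """Energy-Delay Product: energy x latency, lower is better."""
    return energy_joules * delay_seconds

def edp_reduction(baseline: tuple, improved: tuple) -> float:
    """Factor by which `improved` reduces EDP relative to `baseline`
    (e.g. 7.5 means a 7.5x EDP reduction)."""
    return edp(*baseline) / edp(*improved)

# Hypothetical (energy, delay) pairs, not taken from the paper:
baseline = (10.0, 0.02)   # 10 J at 20 ms per inference
improved = (4.0, 0.01)    # 4 J at 10 ms per inference
print(edp_reduction(baseline, improved))  # 5.0
```

Because EDP multiplies energy by latency, a design wins only if its combined improvement in both dimensions outweighs any regression in either one.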
Keywords
Computer architecture, field programmable gate arrays, dynamic scheduling, optimization, hardware, bandwidth, parallel processing, multi-tenancy, deep neural network, multi-core, accelerator, FPGA