Improving GPU Throughput through Parallel Execution Using Tensor Cores and CUDA Cores

2022 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)(2022)

Abstract
To accelerate Machine Learning applications, recent GPUs use Tensor cores to speed up general matrix multiplication (GEMM), which lies at the heart of deep learning. The Streaming Multiprocessors in such GPUs also contain CUDA cores for general-purpose computation. While Tensor cores can significantly improve the performance of GEMM, the CUDA cores remain idle while the Tensor cores are running, leading to inefficient resource utilization. In this work, we propose to offload part of the GEMM operations from the Tensor cores to the CUDA cores to fully utilize GPU resources. We investigate the performance bottlenecks in such offloading schemes and propose architectural optimizations to maximize GPU throughput. Our technique is purely hardware-based and requires no new compiler or other software support. Our evaluation shows that the proposed scheme improves performance by up to 19%.
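The core idea of the abstract can be sketched in software: partition the rows of a GEMM so that one slice is handled by the Tensor-core path while the remaining slice runs concurrently on the CUDA cores, then concatenate the partial products. The sketch below is only an illustrative NumPy analogue, not the paper's hardware mechanism; `offload_ratio` is a hypothetical tuning knob standing in for the hardware scheduler's partitioning decision.

```python
import numpy as np

def split_gemm(A, B, offload_ratio=0.25):
    """Software analogue of Tensor-core/CUDA-core GEMM partitioning.
    offload_ratio is the (hypothetical) fraction of rows offloaded to
    the CUDA-core path; both slices are computed with NumPy here."""
    M = A.shape[0]
    split = int(M * (1 - offload_ratio))   # rows kept on the Tensor-core path
    C_tensor = A[:split] @ B               # Tensor-core slice (simulated)
    C_cuda = A[split:] @ B                 # CUDA-core slice (simulated)
    return np.vstack([C_tensor, C_cuda])   # row slices concatenate to the full product

A = np.random.rand(8, 4)
B = np.random.rand(4, 6)
assert np.allclose(split_gemm(A, B), A @ B)  # partitioned result matches full GEMM
```

Because GEMM is row-wise separable, the two slices are independent and can execute in parallel; the paper's contribution is doing this partitioning and scheduling purely in hardware.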
Keywords
Accelerator, GPU, Machine Learning, Tensor core, GEMM, throughput, parallel scheduling