A Compiler for Deep Neural Network Accelerators to Generate Optimized Code for a Wide Range of Data Parameters from a Hand-crafted Computation Kernel

2019 IEEE Symposium in Low-Power and High-Speed Chips (COOL CHIPS), 2019

Cited 5 · Viewed 101
Abstract
This paper presents the design and implementation of a compiler for a deep neural network accelerator that provides high performance and energy efficiency. The compiler allows deep learning frameworks, such as TensorFlow, to exploit the accelerator hardware by automatically generating data transfer code and outer loops around highly tuned, hand-crafted inner loops for a wide range of neural network parameters. In other words, our compiler significantly reduces the development effort for deep learning libraries without sacrificing their performance. We have evaluated our prototype compiler and show that it can generate code for the five most critical deep learning operators with performance comparable to that of hand-tuned code.
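The abstract's central idea, generating outer loops and data movement around a fixed-shape hand-crafted inner kernel so that arbitrary problem sizes are covered, can be illustrated with a minimal sketch. The tile size, the `handcrafted_kernel` stand-in, and the padding strategy below are illustrative assumptions, not the paper's actual compiler output:

```python
import numpy as np

TILE = 4  # hypothetical fixed tile size handled by the hand-crafted kernel

def handcrafted_kernel(a, b, out):
    # Stand-in for a hand-tuned, fixed-size (TILE x TILE) matmul inner loop;
    # in the paper's setting this would be the accelerator's tuned kernel.
    out += a @ b

def tiled_matmul(a, b):
    # Compiler-generated outer loops: pad operands to tile multiples so the
    # fixed-size inner kernel covers arbitrary (m, k, n) shapes, then crop.
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    pm, pk, pn = (-m % TILE), (-k % TILE), (-n % TILE)
    ap = np.pad(a, ((0, pm), (0, pk)))
    bp = np.pad(b, ((0, pk), (0, pn)))
    out = np.zeros((m + pm, n + pn))
    for i in range(0, m + pm, TILE):
        for j in range(0, n + pn, TILE):
            for p in range(0, k + pk, TILE):
                handcrafted_kernel(ap[i:i + TILE, p:p + TILE],
                                   bp[p:p + TILE, j:j + TILE],
                                   out[i:i + TILE, j:j + TILE])
    return out[:m, :n]
```

The padding-and-crop step mirrors how a compiler can reuse one fixed-shape kernel for parameter values that are not tile multiples.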
Keywords
Kernel, Deep learning, Data transfer, Hardware, Monte Carlo methods, Energy efficiency