Trading Performance, Power, and Area on Low-Precision Posit MAC Units for CNN Training

2023 IEEE 35th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD)

Abstract
The recently proposed Posit number system has been regarded as a floating-point format particularly well suited to optimizing the throughput and efficiency of low-precision computations in convolutional neural network (CNN) applications. In particular, the Posit format offers a balance between decimal accuracy and dynamic range, resulting in a distribution of values that is especially attractive for deep learning. However, the adoption of the Posit format still raises concerns regarding hardware complexity, particularly when accounting for the overheads associated with the quire exact accumulator. Accordingly, this paper presents a holistic study of the model accuracy, performance, power, and area trade-offs of adopting low-precision Posit multiply-accumulate (MAC) units for CNN training. Specifically, 28nm ASIC implementations of a reference Posit MAC unit architecture demonstrate that the quire accounts for over 70% of the area and power utilization, and the obtained CNN training results show that its use is only strictly required for mixed low-precision configurations. As a result, reducing the size of the quire cuts area and power by 57% and 47% on average, without imposing visible training accuracy losses.
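To make the format concrete, below is a minimal, illustrative Python decoder for a posit⟨n, es⟩ bit pattern. The defaults (posit⟨8, 2⟩) are an assumption chosen as a common low-precision configuration, not the paper's exact formats or hardware; the variable-width regime field is what trades fraction bits for dynamic range, producing the accuracy/range balance described above.

```python
import math

def decode_posit(p: int, n: int = 8, es: int = 2) -> float:
    """Decode an n-bit posit<n, es> bit pattern into a float (sketch)."""
    p &= (1 << n) - 1
    if p == 0:
        return 0.0
    if p == 1 << (n - 1):                  # 10...0 encodes NaR (Not a Real)
        return math.nan
    sign = -1.0 if p >> (n - 1) else 1.0
    if sign < 0:                           # negative posits are 2's-complemented
        p = (1 << n) - p
    body = p & ((1 << (n - 1)) - 1)        # drop the sign bit -> n-1 bits
    bits = format(body, f"0{n - 1}b")
    # Regime: run of identical leading bits, terminated by the opposite bit.
    lead = bits[0]
    run = len(bits) - len(bits.lstrip(lead))
    k = run - 1 if lead == "1" else -run
    rest = bits[run + 1:]                  # skip the regime terminator bit
    exp_bits = rest[:es].ljust(es, "0")    # truncated exponent bits read as 0
    e = int(exp_bits, 2) if es else 0
    frac_bits = rest[es:]
    f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
    return sign * 2.0 ** (k * (1 << es) + e) * (1.0 + f)
```

For example, decode_posit(0b01100000) yields 16.0, since a regime of k = 1 scales the value by useed = 2^(2^es) = 16 when es = 2.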
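The quire's role can also be illustrated in a few lines: it accumulates products exactly in a wide register and rounds only once at the end, rather than rounding the running sum after every MAC. The sketch below emulates this effect with a generic round-to-p-significand-bits helper; it is an assumption for illustration (it rounds binary significands rather than actual posit encodings, and does not model the paper's quire sizing).

```python
import math
import random

def round_to_bits(x: float, p: int) -> float:
    """Round x to p significand bits (round-to-nearest-even),
    emulating the result register of a low-precision unit."""
    if x == 0.0 or not math.isfinite(x):
        return x
    e = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (p - 1 - e)
    return round(x * scale) / scale

random.seed(0)
# Hypothetical dot product: operands pre-rounded to 8 significand
# bits, standing in for low-precision MAC inputs.
a = [round_to_bits(random.uniform(-1, 1), 8) for _ in range(10_000)]
b = [round_to_bits(random.uniform(-1, 1), 8) for _ in range(10_000)]
products = [x * y for x, y in zip(a, b)]  # each product is exact in a double

# Without a quire: round the running sum after every accumulation.
acc = 0.0
for prod in products:
    acc = round_to_bits(acc + prod, 8)

# Quire-style: accumulate exactly, round once at the end.
exact = round_to_bits(math.fsum(products), 8)

print(f"per-step rounding: {acc}")
print(f"quire-style      : {exact}")
print(f"reference (fp64) : {math.fsum(products)}")
```

Over long accumulation chains, the per-step-rounded result drifts from the exact one because low-order contributions are lost once the running sum grows, which is why the quire matters most for large dot products and why shrinking it is safe only when such error remains below what training can absorb.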
Keywords
Posit Number System, Quire Structure, Low-precision Arithmetic, Convolutional Neural Networks, Deep Learning