Full-BNN: A Low Storage and Power Consumption Time-Domain Architecture based on FPGA

2022 IEEE 33rd International Conference on Application-specific Systems, Architectures and Processors (ASAP)(2022)

Abstract
With the increasing demand for low power and storage consumption on mobile platforms, wearable devices, and Internet of Things devices, deploying lightweight neural networks in edge computing scenarios and other resource-limited settings remains challenging. This paper first proposes a novel time-domain binary convolution structure that reduces the resource and power consumption of the convolution process. Furthermore, through a joint time-domain design of convolution, batch normalization, and the activation function, we propose a full-BNN model and hardware architecture that keeps all intermediate results at one bit, reducing storage requirements by 75%. We then optimize this design with spatial and temporal parallelism to improve overall computing efficiency. Finally, we build an accelerating system on a DSP + FPGA platform and evaluate the optimized architecture on the MNIST data set. The results show that the model can serve as a low-storage, low-power neural network acceleration unit for classification tasks with only a small loss of accuracy. The joint time-domain design method may further inspire other computing architectures.
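For readers unfamiliar with why all intermediates can stay at one bit, the sketch below illustrates the standard functional idea behind such full-BNN pipelines: binarized convolution computed via XNOR and popcount, with batch normalization and the sign activation folded into a single per-channel integer threshold. This is a generic NumPy illustration of that well-known BNN technique, not the paper's time-domain circuit; the array shapes, function names, and threshold derivation (which assumes a positive BN scale) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation) of binary
# convolution with BN + sign fused into an integer threshold, so every
# intermediate feature map remains 1-bit.
import numpy as np

def binary_conv2d(x_bits, w_bits):
    """x_bits: (H, W) input bits in {0,1}; w_bits: (K, K) weight bits.

    Encoding {0,1} for {-1,+1}, the bipolar dot product over a KxK
    window is 2*popcount(XNOR) - K*K.
    """
    H, W = x_bits.shape
    K = w_bits.shape[0]
    out = np.empty((H - K + 1, W - K + 1), dtype=np.int32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x_bits[i:i + K, j:j + K]
            xnor = ~(patch ^ w_bits) & 1          # 1 where bits agree
            out[i, j] = 2 * int(xnor.sum()) - K * K
    return out

def fused_bn_sign(acc, gamma, beta, mean, var, eps=1e-5):
    """Fold BN + sign into one comparison (assumes gamma > 0):
    sign(gamma*(acc-mean)/sqrt(var+eps) + beta) >= 0  <=>  acc >= thresh.
    """
    thresh = mean - beta * np.sqrt(var + eps) / gamma
    return (acc >= thresh).astype(np.uint8)       # 1-bit activations out

# Toy usage: one 3x3 binary filter applied to a random 8x8 bit map.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)
w = rng.integers(0, 2, size=(3, 3), dtype=np.uint8)
acc = binary_conv2d(x, w)
y = fused_bn_sign(acc, gamma=1.0, beta=0.0, mean=0.0, var=1.0)
print(y.shape, y.dtype)  # (6, 6) uint8 bits in {0, 1}
```

Because the BN parameters collapse into a fixed threshold after training, no multi-bit activation ever needs to be stored between layers; the paper's contribution is realizing this comparison in the time domain on FPGA rather than with digital arithmetic as above.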
Keywords
Full-BNN, time-domain, low storage, low power consumption, FPGAs