SME: A Systolic Multiply-accumulate Engine for MLP-based Neural Network

2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)(2022)

Abstract
In this paper, we propose an output-stationary systolic multiply-accumulate engine (SME) with an optimized dataflow for multilayer perceptron (MLP) computation in state-of-the-art Neural Radiance Field (NeRF) algorithms. We also analyze the activation patterns of the NeRF algorithm, which uses ReLU as its activation function, and find that the activations can be sparse, especially in the last several layers. We therefore further exploit this activation sparsity by gating the corresponding multiplications in the SME to save power. The proposed SME is implemented in SpinalHDL, which is translated to Verilog HDL for VLSI implementation in a 40 nm CMOS technology. Evaluation results show that, running at 400 MHz, the proposed SME occupies 31.371 mm² of circuit area and consumes 873.7 mW of power, translating to 12,708.10 ksamples/J and 360.06 ksamples/s/mm².
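The core idea of the abstract, output-stationary accumulation with multiplications gated on zero (ReLU-sparse) activations, can be sketched in software. The following is a minimal behavioral model, not the paper's implementation: the function name, shapes, and the returned gating fraction are illustrative assumptions; a real SME would perform this gating per processing element in hardware.

```python
import numpy as np

def gated_mlp_layer(x, W, b):
    """Compute ReLU(x @ W + b) with output-stationary accumulation,
    skipping (gating) all multiplies for an input activation that is
    zero -- a behavioral model of the power-saving gating described
    in the abstract. Returns the output and the fraction of
    multiplications that were gated off."""
    n_in, n_out = W.shape
    acc = b.astype(np.float64).copy()   # output-stationary accumulators
    gated = 0
    for i in range(n_in):               # activations stream in one by one
        if x[i] == 0.0:                 # sparsity gate: skip this whole row
            gated += n_out
            continue
        acc += x[i] * W[i, :]           # the MACs for this activation
    total = n_in * n_out
    return np.maximum(acc, 0.0), gated / total

# Usage: a toy layer whose input is already ReLU-sparse (~50% zeros)
rng = np.random.default_rng(0)
x = np.maximum(rng.standard_normal(64), 0.0)
W = rng.standard_normal((64, 16))
b = rng.standard_normal(16)
y, gate_frac = gated_mlp_layer(x, W, b)
assert np.allclose(y, np.maximum(x @ W + b, 0.0))  # gating preserves the result
```

Because a gated multiplication contributes exactly zero to the accumulator, skipping it changes nothing numerically; the saving is purely in switching activity, which is why sparsity in the last MLP layers translates directly into power reduction.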
Key words
Systolic Array, Hardware Acceleration, Multi-layer Perceptron (MLP), Neural Radiance Field (NeRF)