Re-configurable parallel feed-forward neural network implementation using FPGA

Mohamed El-Sharkawy, Miran Wael, Maggie Mashaly, Eman Azab

Integration(2024)

Abstract
This paper proposes a novel hardware architecture for a Feed-Forward Neural Network (FFNN) with the objective of minimizing the number of execution clock cycles needed for the network's computation. The proposed architecture relies on two physical layers that are multiplexed and reused throughout the FFNN computation, yielding an efficient parallel design capable of handling Neural Networks (NN) of different sizes. The hardware resources of the proposed FFNN architecture are independent of the network's number of layers; they depend only on the number of neurons in the largest layer. This versatile architecture serves as an accelerator for Deep Neural Network (DNN) computations. The design exploits parallelism by operating the two physical layers concurrently during computation. The implementation uses an 18-bit fixed-point representation and reaches a 200 MHz clock speed on a Spartan-7 FPGA. Furthermore, the proposed architecture achieves a lower neuron computation factor compared to previous works in the literature.
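The layer-reuse idea described in the abstract can be illustrated with a minimal software sketch: an N-layer network is evaluated by ping-ponging between just two buffers, standing in for the two physical layers, with every intermediate result quantized to an 18-bit fixed-point format. This is an illustrative Python model only, not the authors' RTL; the function names, the fractional-bit split, and the ReLU activation are assumptions for the example.

```python
import numpy as np

def to_fixed(x, frac_bits=10, total_bits=18):
    """Quantize to a signed fixed-point word (assumed 18-bit total,
    10 fractional bits; the paper does not specify the split)."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))          # most negative code
    hi = (1 << (total_bits - 1)) - 1       # most positive code
    q = np.clip(np.round(np.asarray(x, dtype=float) * scale), lo, hi)
    return q / scale

def ffnn_two_physical_layers(x, weights, frac_bits=10):
    """Evaluate a multi-layer FFNN while reusing only two buffers,
    mimicking the two multiplexed physical layers: layer i reads
    from one buffer and writes into the other, independent of the
    logical depth of the network."""
    buf = [to_fixed(x, frac_bits), None]   # the two "physical" layers
    for i, W in enumerate(weights):
        src, dst = i % 2, (i + 1) % 2      # ping-pong between buffers
        z = to_fixed(buf[src] @ W, frac_bits)   # MAC stage, requantized
        buf[dst] = np.maximum(z, 0.0)           # ReLU (assumed activation)
    return buf[len(weights) % 2]

# Example: a 3-layer logical network computed with only 2 buffers.
x = np.array([1.0, -2.0])
W1 = np.eye(2)                  # 2 -> 2
W2 = np.array([[2.0], [3.0]])   # 2 -> 1
y = ffnn_two_physical_layers(x, [W1, W2])
```

Note that `weights` may hold any number of layer matrices: the buffer count never grows with depth, which mirrors the paper's claim that resource usage depends only on the widest layer, not on the number of layers.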
Keywords
Artificial neural network, Deep neural network, Feed-forward neural network, FPGA, Hardware parallelism, Machine learning