How to exploit sparsity in RNNs on event-driven architectures.

SCOPES 2021

Abstract
Event-driven architectures have been shown to provide low-power, low-latency artificial neural network (ANN) inference. This is especially beneficial on edge devices, particularly when combined with sparse execution. Recurrent neural networks (RNNs) are ANNs that emulate memory: their recurrent connections allow previous outputs to be reused in the generation of new outputs. However, using RNNs in a sparse context on event-driven architectures raises novel challenges in synchronization and in the handling of sparse data. In this work, these challenges are systematically analyzed, and mechanisms to overcome them are proposed. Experimental results from a monocular depth estimation use case on the NeuronFlow architecture show that sparsity in RNNs can be exploited effectively on event-driven architectures.
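
To make the idea of sparse, event-driven RNN execution concrete, the sketch below shows a delta-threshold update in the style of delta networks (e.g., DeltaRNN): a channel only emits an event, and thus triggers downstream computation, when its value has changed by more than a threshold since its last event. This is a minimal hypothetical Python illustration; the class, names, and thresholding scheme are assumptions for exposition, not the paper's actual mechanism on NeuronFlow.

```python
import numpy as np

class DeltaRNNCell:
    """Hypothetical sketch of an event-driven RNN cell with delta thresholding.

    A channel only emits an event (and triggers computation) when its value
    has changed by more than `theta` since its last event. The accumulated
    pre-activation `m` replaces the dense matrix-vector products of a
    conventional RNN step. Names and scheme are illustrative assumptions,
    not the paper's actual NeuronFlow mechanism.
    """

    def __init__(self, n_in, n_hid, theta=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = 0.1 * rng.standard_normal((n_hid, n_in))
        self.Wh = 0.1 * rng.standard_normal((n_hid, n_hid))
        self.b = np.zeros(n_hid)
        self.theta = theta
        self.x_ref = np.zeros(n_in)   # last input value that fired, per channel
        self.h_ref = np.zeros(n_hid)  # last hidden value that fired, per neuron
        self.m = np.zeros(n_hid)      # accumulated pre-activation state

    def step(self, x):
        # Input events: channels whose change since their last event >= theta.
        dx = x - self.x_ref
        fired = np.abs(dx) >= self.theta
        self.x_ref[fired] = x[fired]
        # Sparse update: only the Wx columns of channels that fired are read.
        self.m += self.Wx[:, fired] @ dx[fired]

        h = np.tanh(self.m + self.b)

        # Recurrent events: thresholded change of the hidden state, folded
        # into `m` so the next step reuses this output (the RNN "memory").
        dh = h - self.h_ref
        fired_h = np.abs(dh) >= self.theta
        self.h_ref[fired_h] = h[fired_h]
        self.m += self.Wh[:, fired_h] @ dh[fired_h]
        return h
```

With theta = 0, the deltas telescope and the cell reduces exactly to a dense vanilla RNN step, h = tanh(Wx x + Wh h_prev + b). With a nonzero threshold and slowly varying inputs (e.g., consecutive video frames in depth estimation), most channels stay below the threshold, so the two masked matrix-vector products touch only a small fraction of the weight columns; this is where the compute savings come from, at the cost of a small, threshold-controlled approximation error.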