Fast Sparse Deep Neural Networks: Theory And Performance Analysis

IEEE Access (2019)

Abstract
In this paper, we propose fast sparse deep neural networks, which aim to offer an alternative way of learning in a deep structure. We examine several optimization algorithms for traditional deep neural networks and find that they suffer from a time-consuming training process because of the large number of connection parameters within and between layers. To reduce this cost, the proposed fast sparse deep neural networks are designed around two main ideas. First, the parameters at each hidden layer are learned through closed-form solutions, unlike the iterative updating strategy of the backpropagation (BP) algorithm. Second, the network estimates the output target by summing multi-layer linear approximations, which differs from most deep neural network models. Unlike traditional deep neural networks, fast sparse deep neural networks achieve excellent generalization performance without fine-tuning. They also effectively overcome the shortcomings of the extreme learning machine and the hierarchical extreme learning machine. Extensive experimental results on benchmark datasets demonstrate that, compared with existing deep neural networks, the proposed model and optimization algorithms are feasible and efficient.
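As a rough illustration of the two ideas highlighted in the abstract, closed-form layer-wise parameter learning and summing per-layer linear approximations of the target, the following Python sketch trains each layer with a regularized least-squares solution and accumulates the layers' estimates. The random feature mapping, tanh activation, ridge regularization, and residual-fitting scheme are assumptions made here for illustration only; the paper's exact construction may differ.

```python
# Minimal, illustrative sketch (not the authors' exact algorithm) of:
# (1) per-layer output weights solved in closed form instead of by BP, and
# (2) summing each layer's linear approximation of the output target.
import numpy as np

def ridge_solution(H, T, lam=1e-3):
    """Closed-form regularized least squares: beta = (H^T H + lam*I)^{-1} H^T T."""
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ T)

def train_fsdnn_sketch(X, T, n_layers=3, n_hidden=100, lam=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    layers, residual, prev = [], T.copy(), X
    for _ in range(n_layers):
        # Random feature mapping for the hidden layer (ELM-style assumption).
        W = rng.standard_normal((prev.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(prev @ W + b)
        # Closed-form linear approximation of the remaining target.
        beta = ridge_solution(H, residual, lam)
        layers.append((W, b, beta))
        residual = residual - H @ beta  # next layer fits what is left
        prev = H
    return layers

def predict_fsdnn_sketch(X, layers):
    # Final estimate is the summation of every layer's linear approximation.
    y, prev = 0.0, X
    for W, b, beta in layers:
        H = np.tanh(prev @ W + b)
        y = y + H @ beta
        prev = H
    return y
```

Because every layer is solved in one linear-algebra step, training avoids iterative gradient updates entirely, which is the source of the speed advantage the abstract claims over BP-trained networks.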
Keywords
Sparse representation, extreme learning machine, deep neural networks, convex approximation, fast sparse deep neural networks