Deep Learning with Limited Numerical Precision
ICML'15: Proceedings of the 32nd International Conference on Machine Learning, Volume 37 (2015)
Abstract
Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.
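To make the rounding scheme concrete, below is a minimal NumPy sketch of stochastic rounding onto a fixed-point grid: a value is rounded up with probability equal to its fractional distance from the lower representable value, so the quantization is unbiased in expectation. The function name and the <word_length, frac_bits> parameters are illustrative assumptions, not the exact configuration or implementation used in the paper.

```python
import numpy as np

def stochastic_round_fixed_point(x, word_length=16, frac_bits=8, rng=None):
    """Quantize x to a signed fixed-point format <word_length, frac_bits>
    using stochastic rounding: round to a neighboring representable value
    with probability proportional to proximity.

    Note: the <16, 8> format here is an illustrative choice, not
    necessarily the split used in the paper's experiments.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = 2.0 ** (-frac_bits)                  # smallest representable step
    scaled = np.asarray(x, dtype=np.float64) / eps
    floor_val = np.floor(scaled)
    prob_up = scaled - floor_val               # fractional part = P(round up)
    rounded = floor_val + (rng.random(scaled.shape) < prob_up)
    # saturate to the range of a signed <word_length, frac_bits> number
    limit = 2.0 ** (word_length - 1)
    rounded = np.clip(rounded, -limit, limit - 1)
    return rounded * eps
```

For example, with frac_bits=8 the value 0.3 lies between 76/256 and 77/256; stochastic rounding returns the upper value with probability equal to the remaining fraction, so repeated roundings average back to 0.3, which is the property that keeps small gradient updates from being systematically lost during low-precision training.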