NV-DNN: Towards Fault-Tolerant DNN Systems with N-Version Programming

2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), 2019

Abstract
Employing deep learning algorithms in real-world applications is becoming a trend. However, a bottleneck that impedes their further adoption in safety-critical systems is reliability. Developing reliable neural network models is challenging because the theory of deep learning is not yet well established and neural network models are highly sensitive to data perturbations. Inspired by the classic N-version programming paradigm for fault tolerance, this paper investigates the feasibility of building fault-tolerant deep learning systems through model redundancy. We hypothesize that if several simplex models are trained independently, they are unlikely to produce erroneous results on the same test cases. A fault-tolerant system can then be designed whose output is determined by all of these models cooperatively. We propose several independence factors that can be introduced when generating multiple versions of neural network models, covering training, network architecture, and data. Experimental results on both MNIST and CIFAR-10 verify that our approach improves the fault tolerance of a deep learning system. In particular, independent training data plays the most significant role in generating multiple models that share the fewest mutual faults.
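To illustrate the arbitration step described above, the following minimal sketch combines the class predictions of N independently trained model versions by majority vote. The function name and the tie-breaking rule (lowest class index wins) are illustrative assumptions, not the authors' implementation; the paper's system may arbitrate differently.

    # Minimal sketch: majority-vote arbitration over N model versions.
    import numpy as np

    def majority_vote(predictions: np.ndarray) -> np.ndarray:
        """Combine class predictions from N independently trained models.

        predictions: shape (n_versions, n_samples), each entry is the
        predicted class index of one version for one test sample.
        Returns the per-sample class chosen by the most versions
        (ties broken by the lowest class index).
        """
        n_versions, n_samples = predictions.shape
        n_classes = int(predictions.max()) + 1
        votes = np.zeros((n_samples, n_classes), dtype=int)
        for v in range(n_versions):
            votes[np.arange(n_samples), predictions[v]] += 1
        return votes.argmax(axis=1)

    # Example: three versions trained with different seeds or data subsets;
    # version 2 is faulty on the second sample, but the majority masks it.
    preds = np.array([
        [3, 7, 1],   # version 1
        [3, 2, 1],   # version 2 (faulty on sample 2)
        [3, 7, 1],   # version 3
    ])
    print(majority_vote(preds))  # -> [3 7 1]

A fault is masked as long as fewer than half of the versions fail on the same input, which is why the paper's independence factors (training, network, data) aim to minimize shared faults across versions.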
Keywords
deep learning, fault tolerance, NV-DNN