Scale-Dropout: Estimating Uncertainty in Deep Neural Networks Using Stochastic Scale
CoRR (2023)
Abstract
Uncertainty estimation in Neural Networks (NNs) is vital for improving the
reliability and confidence of predictions, particularly in safety-critical
applications. Bayesian Neural Networks (BayNNs) with Dropout as an
approximation offer a systematic approach to quantifying uncertainty, but they
inherently suffer from high hardware overhead in terms of power, memory, and
computation. Thus, applying BayNNs to edge devices with limited resources or
to high-performance applications is challenging. Some of the
inherent costs of BayNNs can be reduced by accelerating them in hardware on a
Computation-In-Memory (CIM) architecture with spintronic memories and
binarizing their parameters. However, implementing conventional dropout-based
BayNNs requires numerous stochastic units. In this paper, we propose Scale
Dropout, a novel regularization technique for Binary Neural Networks (BNNs),
and Monte Carlo-Scale Dropout (MC-Scale Dropout)-based BayNNs for efficient
uncertainty estimation. Our approach requires only one stochastic unit for the
entire model, irrespective of the model size, leading to a highly scalable
Bayesian NN. Furthermore, we introduce a novel Spintronic memory-based CIM
architecture for the proposed BayNN that achieves more than 100× energy
savings compared to the state-of-the-art. We validate our method, showing up to
a 1% improvement in predictive performance and superior uncertainty estimates
compared to related work.
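The abstract does not spell out the mechanism, but its central claim — one stochastic unit for the entire model, with Monte Carlo sampling at inference — can be illustrated with a short sketch. The following PyTorch code is a hypothetical reading, not the paper's implementation: the `ScaleDropout` module, its `scale` and `p` parameters, and `mc_scale_dropout_predict` are all assumptions, and a full-precision network stands in for a binary one.

```python
import torch
import torch.nn as nn


class ScaleDropout(nn.Module):
    """Hypothetical scale-dropout layer: instead of dropping individual
    units (one random draw per unit, as in conventional dropout), a single
    Bernoulli draw per forward pass decides whether the activations are
    rescaled. One such draw can be shared by the whole model."""

    def __init__(self, scale: float = 2.0, p: float = 0.5):
        super().__init__()
        self.scale = scale  # assumed scale factor; not specified in the abstract
        self.p = p          # probability of applying the scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:  # behave as identity in plain eval mode
            return x
        # A single Bernoulli sample, shared by every element of the
        # activation tensor, rather than one sample per unit.
        gate = torch.bernoulli(torch.tensor(self.p, device=x.device))
        return x * self.scale if gate.item() == 1.0 else x


@torch.no_grad()
def mc_scale_dropout_predict(model: nn.Module, x: torch.Tensor, T: int = 20):
    """MC-Dropout-style inference: keep the stochastic scale active, run T
    forward passes, and use the mean as the prediction and the variance
    as an uncertainty estimate."""
    model.train()  # keep stochastic layers sampling at inference time
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(T)])
    return probs.mean(dim=0), probs.var(dim=0)


# Hypothetical usage with a small full-precision stand-in for a BNN:
net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), ScaleDropout(), nn.Linear(32, 10))
mean, var = mc_scale_dropout_predict(net, torch.randn(4, 16))
```

Because only a single Bernoulli sample is drawn per forward pass, the cost of the stochastic source stays constant as the model grows, which is the scalability argument the abstract makes.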