Normalized Label Distribution: Towards Learning Calibrated, Adaptable and Efficient Activation Maps

CoRR (2020)

Abstract
The vulnerability of models to data aberrations and adversarial attacks influences their ability to demarcate distinct class boundaries efficiently. The network's confidence and uncertainty play a pivotal role in weight adjustments and in the extent to which such attacks are acknowledged. In this paper, we address the trade-off between the accuracy and calibration potential of a classification network. We study the significance of ground-truth distribution changes on the performance and generalizability of various state-of-the-art networks, and compare the proposed method's response to unanticipated attacks. Furthermore, we demonstrate the role of label-smoothing regularization and normalization in yielding better generalizability and calibrated probability distributions by proposing normalized soft labels to enhance the calibration of feature maps. Subsequently, we substantiate our inference by translating conventional convolutions to padding-based partial convolutions to establish the tangible impact of such corrections on performance and convergence rate. We graphically elucidate the implications of these variations with the critical purpose of corroborating reliability and reproducibility across multiple datasets.
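The abstract builds on label-smoothing regularization, which replaces one-hot targets with soft labels. The paper's exact normalization scheme for its "normalized soft labels" is not specified here, so the following is a minimal sketch of the conventional label-smoothing step it extends (the `eps` value and class count are illustrative assumptions):

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Conventional label smoothing: blend one-hot targets with a uniform prior.

    Note: this sketches only the standard scheme; the paper's normalized
    soft-label variant is not fully specified in the abstract.
    """
    one_hot = np.eye(num_classes)[labels]                 # (N, C) one-hot targets
    return one_hot * (1.0 - eps) + eps / num_classes      # each row still sums to 1

# Illustrative usage: two samples, three classes.
targets = smooth_labels(np.array([0, 2]), num_classes=3, eps=0.1)
```

Because each row of the smoothed targets remains a valid probability distribution, the cross-entropy loss and calibration metrics can be computed against them unchanged.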
Keywords
efficient activation maps, label distribution, learning calibrated